stravanasu
  • 2 Posts
  • 44 Comments
Joined 1Y ago
Cake day: Jul 05, 2023


these autonomous agents represent the next step in the evolution of large language models (LLMs), seamlessly integrating into business processes to handle functions such as responding to customer inquiries, identifying sales leads, and managing inventory.

I really want to see what happens. It seems to me these “agents” are still useless in handling tasks like customer inquiries. Hopefully customers will get tired and switch to companies that employ competent humans instead…



The current security philosophy almost seems to be: “In order to make it secure, make it difficult to use”. This is why I propose to go a step further: “In order to make it secure, just don’t make it”. The safest account is the one that doesn’t exist or that can’t be accessed by anyone, including its owner.


We aren’t supposed to accept that. We can simply not use their software. As users, that’s the only power we have over devs. But it’s a power that only works on devs who are interested in having many users.


Which can be further summarized: academics (🙋🏻) are basically a bunch of idiotic sheep, despite being in academia.

See also https://pluralistic.net/2024/08/16/the-public-sphere/#not-the-elsevier


Yeah to me too. I’m not clicking on that “Download client” link for sure.



You brought back memories and I got interested. Interesting reading about privacy:

https://www.irchelp.org/security/privacy.html

How much of it is true?



Agree (you made me think of the famous face on Mars). I meant that more as a joke. Also, there’s no clear threshold or divide on one side of which we can speak of “human intelligence”. There’s a whole range from impairing disabilities to Einstein and Euler – if it even makes sense to use a linear 1D scale, which it very probably doesn’t.


Title:

ChatGPT broke the Turing test

Content:

Other researchers agree that GPT-4 and other LLMs would probably now pass the popular conception of the Turing test. […]

researchers […] reported that more than 1.5 million people had played their online game based on the Turing test. Players were assigned to chat for two minutes, either to another player or to an LLM-powered bot that the researchers had prompted to behave like a person. The players correctly identified bots just 60% of the time

Complete contradiction. Trash Nature; it’s become nothing but an extremely expensive science-gossip magazine.

PS: The Turing test involves comparing a bot with a human (not knowing which is which). So if more and more bots pass the test, this can be the result either of an increase in the bots’ Artificial Intelligence, or of an increase in humans’ Natural Stupidity.


This is so cool! Not just the font but the whole process and study. Please feel free to cross-post to Typography & fonts.


You’re simplifying the situation and dynamics of science too much.

If you submit or share a work that contains a logical or experimental error – it says “2+2=5” somewhere – then yes, your work is not accepted, it’s wrong, and you should discard it too.

But many works have no (visible) logical flaws and present hypotheses compatible with current experimental errors. They explore, propose, or start from alternative theses. They may be pursued and considered by a minority, even a very small one, while the majority pursues something else. But this doesn’t make them “rejected”. In fact, theories followed by minorities periodically have breakthroughs and suddenly win over the majority. This is a vital part of scientific progress. Outside the “2+2=5” case, it’s a matter of majority/minority, and that emphatically does not mean acceptance/rejection.

On top of that, the relationship between “truth” and “majority” is even more fascinatingly complex. Let me give you an example.

Probably (this is just statistics from personal experience) the vast majority of physicists would tell you that “energy is conserved”. A physicist specialized in general relativity, however, would point out that there’s a difference between a conserved quantity (somewhat like a fluid) and a balanced quantity. And energy, strictly speaking, is balanced, not conserved. This fact, however, creates no tension: if you have a simple conversation – 30 min or a couple of hours – with a physicist who stated that “energy is conserved”, and you explain the precise difference, show the equations, examine references together etc., that physicist will understand the clarification and simply agree; no biggie. In the situations where that physicist works, the difference has little practical import (but obviously there are situations where it matters).
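For concreteness, here is a minimal sketch of the distinction, in standard notation ($T^{\mu\nu}$ is the energy-momentum tensor, $\Gamma$ the connection coefficients); it’s only meant to illustrate the point above, not to replace the references:

```latex
% Special relativity: an ordinary divergence. By Gauss's theorem it
% integrates to a globally conserved total energy-momentum:
\partial_\mu T^{\mu\nu} = 0

% General relativity: a covariant divergence, i.e. a local balance
% equation; the connection terms generally prevent integration to a
% conserved total:
\nabla_\mu T^{\mu\nu}
  = \partial_\mu T^{\mu\nu}
  + \Gamma^{\mu}{}_{\mu\lambda}\, T^{\lambda\nu}
  + \Gamma^{\nu}{}_{\mu\lambda}\, T^{\mu\lambda}
  = 0
```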

A guided tour through general relativity (see this discussion by Baez as a starting point, for example) will also convince a physicist who still insisted that energy is conserved even after the balance vs conservation difference was clarified. With energy, either “conservation” makes no sense, or if we want to force a sense, then it’s false. (I myself have been on both sides of this dialogue.)

This shows a paradoxical situation: the majority may state something that’s actually not true – but the majority itself would simply agree with this, if given the chance! This paradoxical discrepancy arises especially today owing to specialization and to too little or too slow osmosis among the different specialities, plus excessive simplification in postgraduate education (approximate facts are presented as exact). Large groups maintain some statements as facts simply because the more correct point of view is too slow to spread through their community. The energy claim is one example; there are others (thermodynamics and quantum theory have plenty). I think every physicist working in a specialized field is aware of a couple of such majority-vs-truth discrepancies. And this teaches humility, openness to reviewing one’s beliefs, and reliance on logic, not “majorities”.

Edit: a beautiful book by O’Connor & Weatherall, The Misinformation Age: How False Beliefs Spread, discusses this phenomenon and models of this phenomenon.


Peer review, as the name says, is review, not “acceptance”. At least in principle, its goal is to help you check whether the logic behind your analysis is sound and your experiments have no flaws. That’s why one can find articles with completely antithetical results or theses, both peer-reviewed (and I’m not speaking of purchased pseudo peer-review). Unfortunately it has also become a misused political or business tool, that’s for sure – see “impact factors”, “h-indexes”, and similar bulls**t.


That’s how I interpret it. My question is whether it’s generally interpreted that way, or misinterpreted.



True that! And a change from 2% to 5% may feel much larger than it actually is.


One aspect that I’ve always been unsure about, with Stack Overflow, and even more with sibling sites like Physics Stack Exchange or Cross Validated (stats and probability), is the voting system. On the physics and stats sites, for example, I not infrequently saw answers that were accepted and upvoted but actually wrong. The point is that users can end up voting for something that looks right or useful, even if it isn’t (probably less the case when it comes to programming?).

Now an obvious reply to this comment is “And how do you know they were wrong, and non-accepted ones right?”. That’s an excellent question – and that’s exactly the point.

In the end, the only judge of what’s correct is you and your own logical reasoning. In my opinion this kind of site should get rid of the voting and acceptance system, and simply list the answers, with useful comments and counter-comments under each. When it comes to questions about science and maths, truth is not determined by majority votes or by authorities, but by sound logic and experiment. That’s the very basis from which science started. As Galileo put it:

But in the natural sciences, whose conclusions are true and necessary and have nothing to do with human will, one must take care not to place oneself in the defense of error; for here a thousand Demostheneses and a thousand Aristotles would be left in the lurch by every mediocre wit who happened to hit upon the truth for himself.

For example, at some point in history there was probably only one human being on earth who thought “the notion of simultaneity is circular”. And at that point in time that human being was right, while the majority who thought otherwise were wrong. Our current education system and sites like those reinforce the anti-scientific view that students should study and memorize what “experts” say, and that majorities dictate what’s logically correct or not. As Gibson said (1964): “Do we, in our schools and colleges, foster the spirit of inquiry, of skepticism, of adventurous thinking, of acquiring experience and reflecting on it? Or do we place a premium on docility, giving major recognition to the ability of the student to return verbatim in examinations that which he has been fed?”

Alright sorry for the rant and tangent! I feel strongly about this situation.


Thank you! I’d never heard of it; it looks very interesting!


A repository of often (or at least not seldom) outdated answers.



Musk’s attitude is “It’s mine, I can do whatever I please”. In the long run a person’s reply to this attitude is “Fair enough, keep it. I’ll use something else”. Like I and many others have.


Understandably, it has become an increasingly hostile or apathetic environment over the years. If one checks questions from 10 years ago or so, one generally sees people eager to help one another.

Now they often expect you to have searched through possibly thousands of questions before you ask one, and immediately accuse you if you missed some – which is unfair, because a non-expert can often miss the connection between two questions phrased slightly differently.

On top of that, some of those questions and their answers are years old, so one wonders if their answers still apply. Often they don’t. But again it feels like you’re expected to know whether they still apply, as if you were an expert.

Of course it isn’t all like that, there are still kind and helpful people there. It’s just a statistical trend.

Possibly the site should implement an archival policy, where questions and answers are deleted or archived after a couple of years or so.


I share and promote this attitude. If I must be honest it feels a little hopeless: it seems that since the 1970s or 1980s humanity has been going down the drain. I fear “fediverse wars”. It’s 2023 and we basically have a World War III going on, illiteracy and misinformation steadily increase, corporations play the role of governments, science and scientific truth have become anti-Galilean based on “authorities” and majority votes, and natural stupidity is used to train artificial intelligence. I just feel sad.

But I don’t mean to be defeatist. No matter the chances we can fight for what’s right.


Maybe my comment wasn’t clear or you misread it. It wasn’t meant to be sarcastic. Obviously there’s a problem and we want (not just need) to do something about it. But it’s also important to be careful about how the problem is presented - and manipulated - and about how fingers are pointed. One can’t point a finger at “Mastodon” the same way one could point it at “Twitter”. Doing so has some similarities to pointing a finger at the http protocol.

Edit: see for instance the comment by @while1malloc0@beehaw.org to this post.


I’m not fully sure about the logic and perhaps hinted conclusions here. The internet itself is a network with major CSAM problems (so maybe we shouldn’t use it?).


The title of the post is incorrect. It should be: “Elon Musk rebrands Twitter as the X11 Window System”.


In my case this translates to “Twitter is now deleted”.


There are surely pros and cons, possibly good and possibly bad outcomes with such restrictions, and the whole matter is very complicated.

From my point of view part of the problem is the decline of education and of teaching rational and critical thinking. Science started when we realized and made clear that truth – at least scientific truth – is not about some “authority” (like Aristotle) saying that things are so-and-so, or a majority saying that things are so-and-so. Galilei said this very clearly:

But in the natural sciences, whose conclusions are true and necessary and have nothing to do with human will, one must take care not to place oneself in the defense of error; for here a thousand Demostheneses and a thousand Aristotles would be left in the lurch by every mediocre wit who happened to hit upon the truth for himself.

The problem is that today we’re relegating everything to “experts”, or more generally, we’re expecting someone else to apply critical thinking in our place. Of course this is unavoidable to some degree, but I think the situation could be much improved from this point of view.


I don’t know what you have in mind with “trustworthy”, and about what, so maybe this comment is worthless for you. But I’ve been using their cloud storage for several years (like other commenters here), for work-related files, and to sync them between computers and phone. Their syncing system and apps are actually great. No complaints on my part.


Light is faster than… light!?
This insightful blog post seems to refer to [this article](https://www.pcgamer.com/your-next-router-could-be-a-lightbulb-ultra-fast-li-fi-tech-just-took-a-major-step-toward-mass-market-availability). I hope the article is an isolated case. Although it's undeniable that scientific illiteracy is spreading.

Mathematical language is a language, but mathematics is not just a language. It is a structure with internal rules that are not determined by pure convention (as natural languages are). We could internationally agree, from tomorrow, to call “blue” whatever is now called “red” and vice versa, but we couldn’t agree to say that “2 + 2 = 5”, because that would lead to internal inconsistencies (we could agree to use the symbol “5” for 4, but that’s a different matter).

This is also related to a staple of science: that scientific and mathematical truth is not determined by a majority vote, but by internal consistency. Indeed modern science started with this very paradigm shift. Quoting Galilei:

But in the natural sciences, whose conclusions are true and necessary and have nothing to do with human will, one must take care not to place oneself in the defense of error; for here a thousand Demostheneses and a thousand Aristotles would be left in the lurch by every mediocre wit who happened to hit upon the truth for himself.

If we want to train an algorithm to infer rules from language, we need to give samples of language where the rules are obeyed strictly (and yet this may not be enough). Otherwise the algorithm will wrongly generalize that the rules aren’t strict (in fact it’ll just see a bunch of mutually inconsistent examples). Which is what happens with ChatGPT.
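As a toy illustration of that last point (this is just a frequency count over a made-up mini-corpus, not how an LLM is actually trained, but the point about learning statistics rather than rules is the same):

```python
from collections import Counter

# Toy "corpus": statements about a simple arithmetic fact.
# Most are correct, but a few are not - just as in real web-scale text.
corpus = [
    "2+2=4", "2+2=4", "2+2=4", "2+2=4", "2+2=4",
    "2+2=4", "2+2=4", "2+2=5", "2+2=5", "2+2=22",
]

# A minimal "language model": estimate P(completion | prompt) by counting
# how often each completion follows the prompt "2+2=" in the corpus.
prompt = "2+2="
completions = Counter(s[len(prompt):] for s in corpus if s.startswith(prompt))
total = sum(completions.values())

for completion, count in completions.most_common():
    print(f"P({completion!r} | {prompt!r}) = {count / total:.2f}")

# Output:
# P('4' | '2+2=') = 0.70
# P('5' | '2+2=') = 0.20
# P('22' | '2+2=') = 0.10
```

The model reproduces the statistics of its samples; it has no notion that exactly one of those completions is required by the internal rules of arithmetic.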

Edit: On top of this, Gödel’s theorem and other related theorems have shown that mathematical reasoning cannot be reduced to pure symbol manipulation (Hilbert’s unfulfilled dream). So one can’t infer mathematical reasoning from language patterns alone. Children learn reasoning not only through language training, but also through behaviour training (this was pointed out by Turing). This is why large language models have intrinsic limitations in what they can achieve and be used for.



My point was that a coffee machine is designed to make coffee, not to keep track of time. Maybe it always takes roughly the same amount of time to make a coffee, and so someone uses it as a proxy stopwatch. But it can very well suddenly take more or less time, without anything being wrong about it – maybe different coffee brands, cleaned pipes, or whatnot.

ChatGPT is an algorithm designed to parrot language, not to perform mathematical reasoning based on logic rules.


For me it’s like using a coffee machine as a stopwatch, and then complaining that it doesn’t always give the exact time lapsed.


The number of people protesting against them on their “Issues” page is amazing. The devs have now blocked the creation of new issue tickets and of comments on existing ones.

It’s funny how in the “explainer” they present this as something done for the “user”, when it’s clearly not developed for the “user”. I wouldn’t accept something like this even if it was developed by some government – even less by Google.

I have just reported their repository to GitHub as malware, as an act of protest, since they closed the possibility of submitting issues or commenting.



Thank you, I read that, as I mention in the post, but the colours they list in the help section don’t match the ones I see (as background colour of the whole row): I’ve never seen orange or grey, and I see white instead.


Racism, homophobia, and similar phenomena often come from ignorance and from living within a bubble. But many responses to them also show clear signs of ignorance and living within a bubble…



What’s the meaning of Nyaa’s entry colours?
There are two kinds of colours that appear in each torrent entry in Nyaa's listings:

- One for the rectangle in the "Category" column. I see many different colours there: purple, red, dark and light grey, green, orange, dark and light yellow...
- One for the whole row. Here I've only seen three different colours so far: white, green, red.

Do these colours, especially the second, mean anything? Nyaa's Help page mentions the meaning of *four* "torrent colours": green, red, orange, grey. But they don't say where these colours appear. If they mean the row colour, then I've never seen an orange or grey one. So I'm very confused. Maybe the Help page is outdated?

OK, not a life-or-death matter, but I've been curious about this for a long time...