
I create proper libraries. I don't do snippets because they make code dirty, redundant and difficult to read in the long run.

I actively discourage people on my team from using snippets copy-pasted everywhere. If it's reusable code, it should be usable by everyone and well tested


This was my immediate reaction as well.

For those who like living a messy life, there’s always Visual Studio (the original beast, not VSCode)


VSCode also supports it for other shells. This repo is not about VSCode; it's about actual shells. You are the one who is incorrect in this case

When someone brings points to the discussion, you react like a fanboy student who just bought a new gaming laptop.

Could you please reply to the points in the discussion, or go back to school? I am too old for your "no, you shit, you stoopid"

I wonder why I keep answering your comments, and that's why this is my last comment


Unfortunately… That is why I commented above…

PowerShell is a shell that pretends to be an OO language, and it fails dramatically at both.

It was a design mistake. It is much better to have a real separation: a real shell and a real OO language, as even Microsoft has recently understood. You can see it in this case too, where PowerShell is the last entry


What is the real advantage compared to plain autocomplete? Was it trained to know command flags?

It is nice to see, anyway, that even at Microsoft the abomination that is PowerShell is the lowest on the list of shells. It is probably time for them to drop it completely


ML/AI is a huge field. If you don't like one side of it, there are millions of others.


ML/AI. Everything else has been eaten by agile/product owners/MBAs/micromanagement. Luckily those people still don't understand AI… And since AI is still stochastic, PI planning and burndown charts don't work.

If agile bites your ass… run away, you are too young to be wasting your life


All scientific computing is built on top of Fortran. Even cutting-edge AI runs on top of high-performance libraries written in Fortran and C. There is simply less need for Fortran developers because the high-performance subroutines are wrapped to be called from higher-level languages such as Python
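As a minimal illustration of that wrapping (assuming a NumPy build linked against a BLAS/LAPACK implementation), the Python below never calls Fortran or C directly, yet the heavy numerical work is delegated to those compiled libraries:

```python
import numpy as np

# Two reasonably large matrices, created from plain Python/NumPy.
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

# The @ operator dispatches to a compiled BLAS routine (dgemm),
# not to interpreted Python loops.
c = a @ b

# Eigenvalue computation is similarly delegated to LAPACK.
eigenvalues = np.linalg.eigvals(c[:100, :100])

# Shows which BLAS/LAPACK build this NumPy installation is linked against.
np.show_config()
```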


First time I see "cool" and "C#" in the same sentence. I've always thought the stereotype of C# is that it is the language for corporate, extremely uncool projects.

Just a comment: COBOL nowadays is heavily outsourced. There are jobs, but they are not as lucrative as in the past. Fortran is still strong in scientific computing, but nowadays it is wrapped in Python. All the people I know who were strong in Fortran (me included) now mostly work with Python or Scala, most of us on ML/AI-related stuff.


Mine was a comment to say that LLMs are not just fancy autocomplete. Although technically an evolution, it is a bit like saying humans are fancy worms because we evolved from worms


Common reinforcement learning methods definitely are.

Are LLMs an evolution of a Markov chain? As much as any method that is not a Markov chain… I would say not directly. Clearly they share concepts, as does any method for simulating stochastic processes, and LLMs are definitely more recent than Markov processes. Then anyone can decide what the inspirations were.

What I wanted to say is that, really, we are discussing a genuinely new method in LLMs, not just "old stuff, more data".

This is my main point.


A Markov chain models a process as transitions between states where the transition probabilities depend only on the current state.

An LLM is ideally less a Markov chain and more similar to discrete Langevin dynamics, as both have a memory (the attention mechanism for LLMs, inertia for LD) and both have noise defined by a parameter (temperature in both cases; the name "temperature" in the LLM context is derived exactly from thermodynamics).

As far as I remember, the original attention paper doesn't reference Markov processes.

I am not saying one cannot explain it starting from a Markov chain; it is just that saying we could have done it decades ago but lacked the horsepower and the data is wrong. We didn't have the method to simulate writing. We now have a decent one, plus the horsepower to train it on a lot of data
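A toy sketch of the difference (everything here is a made-up stand-in, not a real model): an order-1 Markov sampler conditions only on the current token, while an LLM-style sampler scores the whole context window and injects noise through a temperature parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]          # toy vocabulary (assumption)

# Order-1 Markov chain: the next token depends only on the current token.
transition = rng.dirichlet(np.ones(len(vocab)), size=len(vocab))  # rows: P(next | current)

def markov_next(current_idx):
    return rng.choice(len(vocab), p=transition[current_idx])

# LLM-style sampler: scores depend on the *entire* context, sampled with temperature.
def toy_scores(context_idxs):
    # Stand-in for a trained network: any deterministic function of the whole context.
    return np.sin(np.add.outer(context_idxs, np.arange(len(vocab)))).sum(axis=0)

def llm_next(context_idxs, temperature=0.8):
    logits = toy_scores(context_idxs)
    probs = np.exp(logits / temperature)            # temperature-scaled softmax
    probs /= probs.sum()
    return rng.choice(len(vocab), p=probs)

context = [0]                                       # start with "the"
for _ in range(5):
    context.append(llm_next(context))               # memory: the full context is used
print("context sampler:", " ".join(vocab[i] for i in context))
print("markov sampler :", vocab[markov_next(context[-1])])  # only the last token matters
```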


We do. I pay to work with it, and I want it to do what I want, even if it's wrong. I am the one leading.

Same for all professionals and companies paying for these models


It's a bit like saying a human being is a fancy worm. Technically it is true, we evolved from worms, but we are still pretty special compared to worms


LLMs are not Markovian, as the new word doesn't depend only on the previous one; it depends on the previous n words, where n is the context length. I.e., LLMs have a memory that makes the generation process non-Markovian.

You are probably thinking of reinforcement learning, which is most often modeled as a Markov decision process


You can easily build one yourself. Check LlamaIndex and LangChain for prebaked solutions. Otherwise, the math is pretty trivial if you are using normalized embeddings; you can quickly do one with NumPy.

My suggestion anyway is to start with LangChain, which is full of tutorials executable as Jupyter notebooks
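For the NumPy route, here is a minimal sketch (assuming you already have normalized embeddings from whatever model you use; the arrays below are random stand-ins): with unit-length vectors, cosine similarity reduces to a dot product, and retrieval is just a top-k over the scores.

```python
import numpy as np

rng = np.random.default_rng(42)

def normalize(x):
    # Scale rows to unit length so cosine similarity == dot product.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-ins for real embeddings (e.g. from an embedding model); random here.
doc_embeddings = normalize(rng.standard_normal((1000, 384)))
query_embedding = normalize(rng.standard_normal(384))

# Cosine similarity of the query against every document in one matrix-vector product.
scores = doc_embeddings @ query_embedding           # shape (1000,)

# Indices of the top-5 most similar documents, best first.
top_k = np.argsort(scores)[::-1][:5]
print(top_k, scores[top_k])
```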




For these reasons, I always push for simple and straightforward workflows with many commits and merges. For many people git remains a mystery even after years of working with it. I blame the easy-to-use GUIs: many people learn the two buttons to press for a workflow, and they never care to learn more



It partially is. But if someone uses Excel professionally, their requirements are pretty high. Tbf, I don't know the details because I never use it, but Calc is behind Excel for professional use as far as I understand


Depends on the task. It is much better for ML, AI, scientific computing and high-performance computing in general, for developers…

But I use LibreOffice only for cover letters and CVs.

If Excel is needed, Linux is a problem


I don't see anything wrong with this picture. ML == funny!

“AI Casino” vs “old school nights banging heads on books”

Edit. Guys, it was a joke, do I really need to put /s?


Never had issues in the past; I actually did the tests for a few friends, just for fun. But most of the time they are overkill. Now that I have more experience, I realize it takes a few very basic questions to understand whether someone is technically fit for the job.

I don’t know if I would appreciate a complex test now if I was looking for a new position. It feels a bit disrespectful.

I currently struggle to accept all the psychological and HR tests for management positions… They are HR bs. I do them, but they are imho much worse than technical tests, because they are completely useless and arbitrary. Those are really offensive and intrusive


It targets router firmware though… These bot farms do not usually target real GNU/Linux OSes, because it is easier and more effective to attack router firmware that is not well configured by manufacturers and telecoms, and is practically never upgraded.

Therefore they are not a real threat for the standard Mint or Pop!_OS user… let alone Gentoo users

Edit. See https://en.m.wikipedia.org/wiki/Mirai_(malware)


I use Python professionally. I have never seen a really successful supply chain attack on libraries used by "normal" people. There was recently a supply chain attack on PyTorch, as I remember, but it was solved within a few hours.

It is not a real risk for non-developers. It is a risk, but a veeery low one, miles lower than pdf.exe.

Just check this stat for ransomware, taken as an example of viruses: https://www.statista.com/statistics/701020/major-operating-systems-targeted-by-ransomware/

Windows Server is ~20% of the server market. Still, it sits second in that stat, while GNU/Linux (80% of the server market) is practically absent. This is why people do not really worry much: the risk exists, but it is minimal for a well-configured system compared to the competition, even where the competitors are a niche and Linux machines are the main target.

On Windows, an antivirus is not a bad idea… On Linux, a firewall and basic care are usually sufficient


I agree with you, but it is also true that the overwhelming majority of ransomware affects Windows: https://www.statista.com/statistics/701020/major-operating-systems-targeted-by-ransomware/

Linux is not a significant target despite being so widespread

Edit. For those downvoting: Windows Server is ~20% of the server market and it is second in that stat. GNU/Linux distros such as RHEL, Debian and so on are almost 80% of the server market, and still there are not enough reported attacks for them to end up in that stat


Ok, then the experiment you are doing just checks how many attacks you can get over a certain time… It is not really representative of a common use case. And again, this is not a virus. It is a successful attack by a bot on a purposely misconfigured service exposed to the internet. An antivirus is not needed; what is needed is basic configuration. An antivirus cannot help there


And disable password authentication as the first step


Does the attack succeed? That has never happened to me. You see bots trying, but I have really never seen one succeed irl. How is it configured?

Do you also have an RDP honeypot, by chance? Do you see different rates of attack? Honestly curious.

I don't have any Windows licenses around, otherwise it would have been an interesting test


Not at all. If you leave an SSH port open, you don't necessarily get a virus. Try it: set up a Raspberry Pi, install SSH and leave the port open in your firewall. It is much less risky than exposing RDP (the most comparable Windows protocol) on Windows, for instance.

It is a security risk, but absolutely not comparable to installing pdf.exe. Not even in the same league of risk.

As said, try it now and tell me how it goes.

There is a lot of misinformation around security on Linux


I have been using Linux for almost 2 decades and have never seen a virus. And I have never heard of a colleague or friend who got one on Linux. That's why no one has ever installed an antivirus: because, until now, the risk has been practically zero.

On Windows, on the other hand, I have seen so many viruses on friends' and relatives' computers…

People install antiviruses depending on their experience.

To be fair, we all know viruses exist on Linux, but it is objectively pretty difficult to get one. It is not worth installing an antivirus if one doesn't actively install garbage from untrusted sources


More than 5 years, then. The comic was right, with the difference that it took more than a single team of researchers to solve it



Sure, there are several. But, for instance, Python is pretty much only SQLAlchemy; all the others are not really common.

In the end, with a single framework one can use several backends. That is pretty convenient
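A minimal sketch of that convenience (the table and the connection URLs are made up for illustration): the same SQLAlchemy model and query work unchanged whether the engine points at SQLite, PostgreSQL or MS SQL; only the URL changes.

```python
from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"                # hypothetical table
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

# Swap the backend by swapping the URL; the ORM code below stays identical.
engine = create_engine("sqlite:///demo.db")
# engine = create_engine("postgresql+psycopg2://user:pwd@host/db")
# engine = create_engine("mssql+pyodbc://user:pwd@dsn")

Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="ada"))
    session.commit()
    # SQLAlchemy emits the dialect-specific SQL for whichever backend is configured.
    print(session.scalars(select(User).where(User.name == "ada")).all())
```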


How many good ORMs do you have per language? 1? 2? The ORM is practically locked in once one chooses the language


Working in a data-intensive context, I have seen such migrations very often, from and to Oracle, MS SQL, Postgres, SAS, Exasol, Hadoop, Parquet, Kafka. Abstraction, even beyond ORMs, is extremely helpful.

Unfortunately, in most real-world scenarios companies don't value abstraction, because it takes time that cannot be justified in PI plannings and reviews. So people write it whichever way is quicker, and migrations become complete rewrites. A lot of money, time and resources wasted on reinventing the wheel.

The truth is that whoever pays doesn't care, otherwise they'd do it differently. They deserve the waste of money and resources.

On the other hand, now that I think of it, I've never seen an OS migration with real impact. The biggest OS migration I've seen is from CentOS or SUSE to RHEL… In the field I work in, non-Unix OSes are always a bad choice anyway


You are missing the major reason for an ORM: abstracting vendor-specific syntax, i.e. dialects and derived languages such as PL/SQL, T-SQL, etc.

ORMs are supposed to allow you to be vendor agnostic


I am a human who happened to be browsing Lemmy when you answered, and I work in ML with a background in algorithms and HPC. It happened to be a lucky coincidence


Because it cannot be mathematically developed. KPIs, as a class of algorithm, are linear dimensionality reductions from a complex hyperspace to a small, arbitrary reference system built on non-orthogonal axes, aimed at capturing non-periodic, non-stationary phenomena (i.e. ones that unpredictably evolve over time).

Mathematically, performance KPIs do not make much sense for most jobs, unless the job is so straightforward that the hyperspace has such low complexity that the KPIs are a meaningful representation. Not even a call center job has such mathematical characteristics…
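A toy sketch of that information loss (purely hypothetical numbers: fifty "performance dimensions" compressed into three arbitrary, non-orthogonal KPI axes); the reconstruction error shows how little of the original variation the KPIs retain:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical "true" performance of 100 employees in a 50-dimensional space.
performance = rng.standard_normal((100, 50))

# Three arbitrary KPI axes; random, hence almost certainly non-orthogonal.
kpi_axes = rng.standard_normal((3, 50))

# The KPI report: a linear projection of the full picture onto those axes.
kpis = performance @ kpi_axes.T                      # shape (100, 3)

# Best linear reconstruction of the 50-D data from only the 3 KPIs (least squares).
coeffs, *_ = np.linalg.lstsq(kpis, performance, rcond=None)
reconstruction = kpis @ coeffs

# Fraction of the original variance the KPIs fail to capture (most of it here).
lost = np.var(performance - reconstruction) / np.var(performance)
print(f"variance not captured by the 3 KPIs: {lost:.0%}")
```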

As a task, AGI is mathematically much simpler.

However, performance KPIs are the only thing many have with which to judge, as they lack the technical and personal skills to do otherwise. It's a tradeoff, but we must recognize that KPIs are oversimplifications with extreme loss of information, many times useless