I found these “no longer listed” items on eBay
During an annular eclipse this is true. But during the totality phase of a total solar eclipse, the entire sun is being blocked (UV doesn’t magically travel through the moon).
An annular eclipse happens when the moon is near its furthest from Earth, so its angular size is smaller than the sun’s and a ring of sunlight stays visible. A total eclipse happens when the moon is close enough that its angular size is larger than the sun’s, so all direct sunlight is blocked for a couple of minutes. The few moments right before and right after totality are the most dangerous: most of the sun is covered, so looking at it doesn’t hurt, but the exposed sliver can still damage your eyes.
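To put rough numbers on it (approximate figures): the moon is about 3,470 km across, so near apogee (~405,000 km away) it spans roughly 0.49° of sky, while near perigee (~363,000 km) it spans roughly 0.55°. The sun sits at about 0.53°, which is why the same moon can either leave a ring of sun exposed or cover the disc completely.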
OK man, don’t pop a vein over this
That’s incredibly rude. At no point was I angry or enraged. What you’re trying to do is minimize my criticism of your last comment by intentionally making it seem like I was unreasonably angry.
I was going to continue with you in a friendly manner, but screw you. You’re an ass (and also entirely wrong).
A lot of what you said is true.
Since the TPU is a matrix processor instead of a general purpose processor, it removes the memory access problem that slows down GPUs and CPUs and requires them to use more processing power.
Just no. Flat out no. Just so much wrong. How does the TPU process data? How does the data get there? It needs to be shuttled back and forth over the bus. Doing this for a 1080p image several times a second is fine. An uncompressed 1080p image is about 8MB (1920 × 1080 at 4 bytes per pixel). Entirely manageable.
Edit: it’s not even 1080p, because the image would get resized to the model’s input size. So again, 300x300x3 for the last model I could find.
/Edit
Look at this repo. You need to convert the models using the TFLite framework (TensorFlow Lite), which is designed for resource-constrained edge devices. The max input resolution is 224x224x3; I would imagine it can’t handle anything larger.
https://github.com/jveitchmichaelis/edgetpu-yolo/tree/main/data
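To illustrate the constraint, here’s a minimal sketch of loading an Edge TPU compiled model with the TFLite runtime and checking its fixed input shape. The model filename and the delegate library name are assumptions based on Coral’s Linux defaults, not something taken from that repo:

```python
# Minimal sketch: inspect the fixed input size of an Edge TPU model.
# Assumes the tflite_runtime package, the Edge TPU runtime library, and a
# compiled model file (here called "model_edgetpu.tflite") are all present.
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

# The shape is baked in at conversion time, e.g. [1, 224, 224, 3], so every
# frame has to be resized down to it before it ever reaches the TPU.
print(interpreter.get_input_details()[0]["shape"])
```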
Now look at the official model zoo on the Google Coral website.
Not a single model is larger than 40MB. Whereas LLMs start at well over a gig even for the smaller (and inaccurate) models. The good ones start at about 4GB and I frequently run models at about 20GB. The size in parameters really makes a huge difference.
You likely/technically could run an LLM on a Coral, but you’re going to wait on the order of double-digit minutes for a basic response, if not way longer.
It’s just not going to happen.
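For a rough sense of scale, here’s a back-of-envelope sketch. The model size, the effective bus throughput, and the response length are all assumptions, not measurements:

```python
# Back-of-envelope only: every number below is an assumption.
# With no room on the accelerator for the weights, each generated token has
# to stream the whole model across the bus.
model_size_mb = 4 * 1024   # an assumed ~4GB quantized model
bus_mb_per_s = 400         # assumed effective USB 3.0 / PCIe x1 throughput
tokens = 200               # assumed length of a basic response

seconds_per_token = model_size_mb / bus_mb_per_s
print(f"~{seconds_per_token:.0f} s per token, "
      f"~{seconds_per_token * tokens / 60:.0f} min for {tokens} tokens")
# Roughly 10 seconds per token and over half an hour for one short answer.
```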
when comparing apples to apples.
But this isn’t really easy to do, and in some cases it’s impossible.
Historically, Nvidia has done better than AMD in gaming performance because there are so many game-specific optimizations in the Nvidia drivers, whereas AMD’s drivers didn’t have them.
On the other hand, AMD historically had better raw performance in scientific computing tasks (before the deep learning trend).
Nvidia has had a stranglehold on the AI market entirely because of their CUDA dominance. But hopefully AMD has finally bucked that trend with their new ROCm release that is a drop-in replacement for CUDA (meaning you can just run CUDA-compiled applications on AMD with no changes).
Also, AMD’s new MI300X AI processor is (supposedly) wiping the floor with Nvidia’s H100 cards. I say “supposedly” because I don’t have $50k USD to buy both cards and compare myself.
And you can add as many TPUs as you want to push it to whatever level you want
No you can’t. You’re going to be limited by the number of PCIe lanes (a consumer CPU only exposes a couple dozen of them). But putting that aside, those Coral TPUs don’t have any memory, which means for each operation you need to shuffle the relevant data over the bus to the device for processing, and then back again. You’re going to be doing this thousands of times per second (likely much more), and I can tell you from personal experience that running AI like that is painfully slow (if you can get it to even work that way in the first place).
You’re talking about the equivalent of buying hundreds of dollars of groceries, and then getting everything home 10km away by walking with whatever you can put in your pockets, and then doing multiple trips.
What you’re suggesting can’t work.
ATI cards (while pretty good) are always a step behind Nvidia.
Ok, you mean AMD. They bought ATI back in 2006, almost 20 years ago now, and that branding is long dead.
And AMD cards are hardly “a step behind” Nvidia. This is only true if you buy the 24GB top card of the series. Otherwise you’ll get comparable performance from AMD at a better value.
Plus, most distros have them working out of the box.
Unless you’re running a kernel older than 6.x, every distro will support AMD cards. And even then, you could always install the proprietary blobs from AMD and get full support on any distro. The kernel version only matters if you want to use the FOSS kernel drivers for the cards.
Two* GPUs? Is that a thing? How does that work on a desktop?
I’ve been running two GPUs in a desktop for about 15 years now. One AMD and one Nvidia (although not lately).
It really works just the same as a single GPU. The system doesn’t really care how many you have plugged in.
The only difference you have to care about is specifying which GPU you want a program to use.
For example, if you had multiple Nvidia GPUs you could specify which one to use from the command line with:
CUDA_VISIBLE_DEVICES=0
or the first two with:
CUDA_VISIBLE_DEVICES=0,1
Anyways, you get the idea. It’s a thing that people do and it’s fairly simple.
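And if you’d rather pick the GPU from inside the program, here’s a minimal sketch. It assumes PyTorch, but any CUDA-based framework behaves the same way:

```python
import os

# Must be set before any CUDA library initializes, or it gets ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # expose only the first two GPUs

import torch  # assumes a CUDA-enabled build of PyTorch is installed

# Reports 2 even if more GPUs are physically present; "cuda:0" now means the
# first card in the visible list, not necessarily the first card in the box.
print(torch.cuda.device_count())
```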
getting a few CUDA TPUs
Those aren’t “CUDA” anything. CUDA is a parallel processing framework by Nvidia and for Nvidia’s cards.
Also, those devices are only good for inferencing smaller models for things like object detection. They aren’t good for developing AI models (in the sense of training). And they can’t run LLMs. Maybe you can run a smaller model under 4B parameters, but those aren’t exactly great for accuracy.
The best you could hope for is to run a very small instruct model trained on very specific data (like robotic actions) that doesn’t need accuracy in the sense of “knowledge accuracy”.
And completely forget about any kind of generative image stuff.
Are CUDAs something that I can select within pcpartpicker?
I’m not sure what they were trying to say, but there’s no such thing as “getting a couple of CUDA’s”.
CUDA is a framework that runs on Nvidia hardware. It’s the hardware that has “CUDA cores”, which are large numbers of small, low-power processing units. AMD calls theirs “stream processors”.
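If you want to see the distinction on a real machine, here’s a small sketch. It assumes an Nvidia card and a CUDA-enabled build of PyTorch; the same calls also work on AMD through the ROCm build:

```python
import torch  # assumes a CUDA-enabled (or ROCm) build of PyTorch

# CUDA/ROCm is the software side; the cores live on the card itself.
props = torch.cuda.get_device_properties(0)
print(props.name)                           # marketing name of the card
print(props.multi_processor_count)          # SMs/CUs the cores are grouped into
print(props.total_memory // 2**20, "MiB")   # VRAM on the card
```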
I will never accept that CLI is an acceptable end-user implementation
This is a terrible stance. Any time you type something into a search engine, it’s basically a command line. Computers used to be nothing but terminals, and users were just fine with it then.
Literally every OS (including Windows) has some things that can only be done in a command window. How about each has its appropriate uses and we use the best tool for the task?
There’s sometimes the odd little issue here and there with things like touchpads. The issue is that device manufacturers keep their drivers closed source and have zero interest in contributing to things like Linux. So it’s up to open source devs to develop their own drivers.
Sometimes there’s a bug or two, especially on things like laptops. If you’re using Ubuntu, you’re on an older kernel. The bug may have already been fixed upstream but not made it into Ubuntu yet.
I bet if you tried out something newer like OpenSUSE Tumbleweed or an Arch-based distro (like EndeavourOS, which I recommend), you might find the issue gone.
I literally have a pinned tab for a Whisper implementation on GitHub! It’s definitely on my radar to check out. My only concern is how well it handles things like multiple speakers, and whether it generates SDH subtitles. That’s the type that has those extra bits like “Suspenseful music”, “[groans]”, “[screams]”, etc. All the stuff someone hard of hearing would benefit from.
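For reference, the basic flow with the reference openai-whisper package looks roughly like this. That package choice and the filename are assumptions on my part; the pinned repo might be a different implementation like whisper.cpp:

```python
import whisper  # the reference openai-whisper package (an assumption)

model = whisper.load_model("small")        # model size is an arbitrary choice
result = model.transcribe("episode.mkv")   # ffmpeg pulls the audio track out

# Timed segments come back ready to dump into an .srt; speaker labels and SDH
# cues like "[groans]" aren't something it reliably produces out of the box,
# so they'd need extra tooling (e.g. a diarization pass) layered on top.
for seg in result["segments"]:
    print(f'{seg["start"]:7.1f} --> {seg["end"]:7.1f}  {seg["text"].strip()}')
```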
Every company is going to investigate a refund request to make sure they aren’t being extradited or ripped off
This is very true, and a good point. But the fact that my issues persisted after they had a complete outage with that API gives credence to my claim.
And all you’ve said in all these exchange’s is you asked/demanded for a refund
Categorically false. You yourself have said that I provided logs when prompted, regardless if they were incomplete.
I keep forgetting how entitled people are these days
Customers not receiving a service they paid for are absolutely entitled to a refund. That’s literally how commerce works.
And your attempt to shift the focus of my post to me being at fault just won’t fly.
If you don’t provide logs you aren’t following due process
What due process? Where are you getting this from?
demanding a refund
I never demanded a refund. Not once. You keep saying that despite being corrected over and over, which means you’re either dense or are mad for being shown you’re wrong and just need to win the argument. Or maybe you’re the dev themselves trying to be anonymous, who knows.
Good customer service involves giving a refund when asked, even when inconvenient.
Good customer service absolutely does not involve insulting a customer and deleting their account.
So the hill you’re dying on is yelling into the wind about something no one is talking about.
In the 3rd image they said “you are asking us to debug a third party application”, which I wasn’t. They also said “if we can’t help we can definitely do a refund”.
If I don’t give them the logs then they can’t help. If the logs I provided aren’t good enough then they can’t help. So then refund.
I really can’t figure out why you’re such a dog with a bone on this. Why choose this hill?
Honestly, what’s at stake for you here? What are you trying to accomplish? The point you’re pushing so hard is completely irrelevant. It’s like you’re trying to deflect from the core issue. I really wonder why.
No one was placing blame. No one is claiming they didn’t do their part. And regardless if manual calls worked, I wasn’t able to make use of the service.
The point of getting a refund is not even an issue here. It’s the rude and hostile response and deleting my account for a reasonable request.
They could have just said “no” and that would have been it. I would have been irked, and then gotten over it by the next day.
If it doesn’t work when your internet is out, then it’s not local.