I started tinkering with Frigate and saw the option to use a Coral AI device to process the video feeds for object detection.
So I started looking into what else could be done with the device, and everything listed on the site relates to human recognition (poses, faces, body parts) or voice recognition.
Somewhere I read that Stable Diffusion and LLMs are not an option, since they require a lot of RAM, which these kinds of devices lack.
What other good/interesting uses do these devices have? What are some of your deployed services using these devices for?
They are generally used for speech recognition and image classification, sometimes in a BAD way, like face recognition in surveillance cameras.
I mean, that’s not inherently bad; what you do with that data could be, though.
Yeah, they are mostly designed for classification and inference tasks: given a piece of input data, decide which of a set of categories it belongs to. That’s the sort of thing you want to do in near real time, where it isn’t really practical to ship the data off to a data centre somewhere for processing.
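To make that concrete, here’s a minimal sketch of the kind of classification workload these chips are built for, using Google’s pycoral library. The model, label and image file names are just placeholders for whatever Edge-TPU-compiled model you’d actually run:

```python
# Minimal image classification sketch for a Coral Edge TPU using pycoral.
# "model_edgetpu.tflite", "labels.txt" and "frame.jpg" are placeholders.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("model_edgetpu.tflite")  # attaches the Edge TPU delegate
interpreter.allocate_tensors()

# Resize the frame to whatever the model expects and copy it into the input tensor.
image = Image.open("frame.jpg").convert("RGB").resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

# Print the top-3 predicted classes with their scores.
labels = read_label_file("labels.txt")
for c in classify.get_classes(interpreter, top_k=3):
    print(f"{labels.get(c.id, c.id)}: {c.score:.3f}")
```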
I started using Frigate and thought about going the Coral route, but realized you don’t need one if you have a relatively recent Intel CPU (6th gen or newer), as OpenVINO on the iGPU is pretty much on par: https://github.com/blakeblackshear/frigate/discussions/5742
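For anyone curious what the iGPU path looks like outside of Frigate, here’s a rough OpenVINO sketch; the model file and input shape are placeholders, and it’s only meant to show the mechanism Frigate’s OpenVINO detector relies on, not its actual code:

```python
# Rough sketch of running an inference model on an Intel iGPU via OpenVINO.
# "model.xml" and the input shape are placeholders; a real detector converts
# its model to OpenVINO IR first and post-processes the output.
import numpy as np
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # expect something like ['CPU', 'GPU'] when the iGPU is usable

model = core.read_model("model.xml")
compiled = core.compile_model(model, device_name="GPU")  # "GPU" = the integrated GPU

dummy = np.zeros((1, 3, 300, 300), dtype=np.float32)     # batch, channels, height, width
result = compiled([dummy])[compiled.output(0)]
print(result.shape)
```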
A lot of the newer SBCs ship with integrated NPUs/TPUs now as well. I would get a Coral if I were using an older SBC, RPi, or older PC as a camera server for object detection. Currently I have an ESP32-CAM watching a bird feeder; that feed goes to a modern server for bird species recognition, but I could see a Coral as an option there.
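Roughly, the hand-off looks like the sketch below, though the snapshot URL, model and label files are placeholders rather than my exact setup (it assumes the stock ESP32-CAM web server with its /capture endpoint and a quantized TFLite classifier on the server side):

```python
# Hypothetical sketch of the bird-feeder pipeline: pull a JPEG from the
# ESP32-CAM and classify it on the server with a quantized TFLite
# bird-species model. URL, model and label paths are placeholders.
import io
import urllib.request
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

SNAPSHOT_URL = "http://esp32-cam.local/capture"   # assumes the stock CameraWebServer firmware

interpreter = Interpreter(model_path="bird_species.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, _ = inp["shape"]                 # typical NHWC image model

with urllib.request.urlopen(SNAPSHOT_URL) as resp:
    frame = Image.open(io.BytesIO(resp.read())).convert("RGB").resize((width, height))

# uint8 input assumes a quantized model; a float model would need scaling instead.
interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(frame, dtype=np.uint8), 0))
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]

labels = [line.strip() for line in open("bird_labels.txt")]
print("Best guess:", labels[int(np.argmax(scores))])
```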
Can you tell me more about your bird recognition setup? I currently have a feeder with a PiCam on it that records based on movement (just using RPi_Cam_Web_Interface), but I would love to do something like that!
Image recognition, speech-to-text, text-to-speech, classification, and similar smaller models. They are fast, but have no memory worth mentioning and are heavily dependent on data access speed. AFAIK, transformer-based models are hugely memory-bound and may not be a good match when run on these externally over USB 3.
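To show where the USB hop comes in: with the plain TFLite runtime, the Coral is just a delegate you attach to the interpreter, and the input/output tensors cross that USB link on every invoke. A rough sketch with placeholder paths:

```python
# Sketch of loading an Edge-TPU-compiled model on a USB Coral with tflite_runtime.
# Each invoke() pushes the input tensor to the device and pulls the result back,
# which is why host-to-device transfer speed matters.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",                  # compiled with edgetpu_compiler
    experimental_delegates=[load_delegate("libedgetpu.so.1")],  # Linux; other OSes use a different filename
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])      # stand-in for a real camera frame
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()                                    # runs on the Edge TPU, over USB
print(interpreter.get_tensor(interpreter.get_output_details()[0]["index"]).shape)
```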
They’re a great use for that otherwise useless “Wi-Fi” M.2 slot on a wired machine, and they’re not too expensive either. If your iGPU is busy transcoding videos, offloading detection to the Coral means it won’t interfere with your Frigate or Immich workload. And they’re supposed to be energy efficient too.