
That’s pretty much what I do, yeah. On my computer or phone, I split an epub into individual text files, one per chapter, using pandoc (or similar tools). Then after I read each chapter, I upload it to my summarizer and perhaps ask some pointed questions.
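The splitting step looks roughly like this, as a minimal sketch. It assumes pandoc is on your PATH and that chapters come out as top-level headings, which varies from book to book:

```python
# Sketch: convert an epub to markdown with pandoc, then split it into
# one file per chapter. Assumes chapters appear as "# " headings; some
# epubs nest them deeper, so adjust the regex as needed.
import re
import subprocess
from pathlib import Path

book = "book.epub"  # hypothetical input file
markdown = subprocess.run(
    ["pandoc", book, "-t", "markdown"],
    capture_output=True, text=True, check=True,
).stdout

# Split at each top-level heading, keeping the heading with its chapter.
chapters = [c for c in re.split(r"(?m)^(?=# )", markdown) if c.strip()]

outdir = Path("chapters")
outdir.mkdir(exist_ok=True)
for i, chapter in enumerate(chapters, start=1):
    (outdir / f"chapter_{i:02d}.txt").write_text(chapter)
```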

It’s important to use a tool that stays confined to the context of the provided file. My first test when trying such a tool is to ask it a general-knowledge question that’s not related to the file. The correct answer is something along the lines of “the text does not provide that information”, not an answer that it pulled out of thin air (whether it’s correct or not).
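In script form, that sanity check amounts to something like this. It’s just a sketch: ask() is a hypothetical stand-in for whatever summarizer you’re evaluating, and the refusal phrases are only examples:

```python
# Sketch of the grounding test described above. ask() is a hypothetical
# wrapper around whatever summarizer/LLM tool you're evaluating.
def ask(document: str, question: str) -> str:
    raise NotImplementedError("plug your summarizer in here")

chapter = open("chapters/chapter_01.txt").read()

# A general-knowledge question the chapter can't answer.
reply = ask(chapter, "What is the capital of Australia?")

# A grounded tool should refuse rather than answer from training data.
refusals = ("does not provide", "not mentioned", "no information")
if any(phrase in reply.lower() for phrase in refusals):
    print("PASS: stayed confined to the document")
else:
    print("FAIL: answered from thin air:", reply)
```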


I get that, and it’s good to be cautious. You certainly need to be careful with what you take from it. For my use cases, I don’t rely on “reasoning” or “knowledge” in the LLM, because they’re very bad at that. But they’re very good at processing grammar and syntax and they have excellent vocabularies.

Instead of thinking of it as a person, I think of it as the world’s greatest rubber duck.


It’s as open as most Android brands. I don’t use any of Boox’s services or apps. I installed F-Droid and use open-source apps from there. I use Librera as my ebook reader, with Syncthing to sync my book library between my desktop, ereader, and phone. It’s possible to set up the Play Store but I don’t bother, personally.

It’s not a 100% smooth experience but I’m very happy with the F-Droid compatibility. I absolutely refuse to get locked into a walled garden.


I’ve done this to give myself something akin to CliffsNotes to review each chapter after I read it. I find it extremely useful, particularly for more difficult reads. Reading philosophy texts that were written a hundred years ago and haphazardly translated 75 years ago can be a challenge.

That said, I have not tried to build this directly into my ereader and I haven’t used Boox’s specific service. But the concept has clear and tested value.

I would be interested to see how it summarizes historical texts about these topics. I don’t need facts (much less opinions) baked into the LLM. Facts should come from the user-provided source material alone. Anything else would severely hamper its usefulness.


Related feature on my wish list: I’d love a way to basically fork a feed based on regex pattern matching. This would be useful for some premium feeds that lump multiple podcasts together. For example, one of my Patreon feeds includes three shows: the ad-free main feed, the first-tier weekly premium feed, and the second-tier monthly premium feed.

I don’t want to filter them out, because I DO want to listen to all of them, but for organizational purposes I don’t want them lumped together. I’d prefer to see them as two or three separate podcasts in my podcast list.

Another example is the Maximum Fun premium BoCo feed. They include the bonus content for ALL their shows (which is…a lot) in a single feed. I only listen to about half a dozen, and even that is a bit of a mess in one feed!
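For the curious, here’s roughly what I’m imagining, as a standard-library sketch. The filename and title patterns are made up; a real implementation would match on whatever actually distinguishes each show in your feed:

```python
# Sketch: "fork" one combined RSS feed into separate feeds by matching
# episode titles against regexes. Patterns and filenames are made up.
import copy
import re
import xml.etree.ElementTree as ET

tree = ET.parse("combined_feed.xml")  # hypothetical downloaded feed

forks = {
    "main_adfree.xml": re.compile(r"^Main Show"),
    "weekly_bonus.xml": re.compile(r"^Weekly Bonus"),
    "monthly_bonus.xml": re.compile(r"^Monthly Bonus"),
}

for filename, pattern in forks.items():
    fork = copy.deepcopy(tree)
    channel = fork.getroot().find("channel")
    # Drop every episode whose title doesn't match this fork's pattern.
    for item in channel.findall("item"):
        if not pattern.search(item.findtext("title", default="")):
            channel.remove(item)
    fork.write(filename, encoding="utf-8", xml_declaration=True)
```

Each output file would then be a self-contained feed you could subscribe to separately.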


Great points, thanks.

Can you clarify what you mean by “local decryption”? I thought Proton and Tuta work pretty much the same way, but perhaps there’s a distinction I’m missing.

One thing I like about Tuta is that it can cache your messages in your browser’s localStorage so you can do full-text search. FWIW, I think Proton added a similar feature recently, though I haven’t tried it. I imagine neither would work very well with large mailboxes; you’re probably better off configuring a real email client.


Do they offer cloud storage now? From what I can see on their website, it’s 500GB… just for email. I mean, sure, that’s cool, but it would take me several lifetimes to accumulate 500GB of email, so it’s not much of a selling point for me.

It’s a good email service, anyway. I’ve been using the free tier for a few years. It’s similar to Proton, though in theory Tuta is more private because it encrypts the headers as well as the message body.


I posted some of my experience with Kagi’s LLM features a few months ago here: https://literature.cafe/comment/6674957. TL;DR: the summarizer and document discussion are fantastic, because they don’t hallucinate. The search integration is as good as anyone else’s, but still nothing to write home about.

The Kagi assistant isn’t new, by the way; I’ve been using it for almost a year now. It’s now out of beta and has an improved UI, but the core functionality seems mostly the same.

As far as actual search goes, I don’t find it especially useful. It’s better than Bing Chat or whatever they call it now because it hallucinates less, but the core concept still needs work. It basically takes a few search results and feeds them into the LLM for a summary. That’s not useless, but it’s certainly not a game-changer. I typically want to check its references anyway, so it doesn’t really save me time in practice.
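Mechanically it’s the usual retrieve-then-summarize pattern, something like this purely illustrative sketch, where search() and llm() are hypothetical stand-ins, not Kagi’s actual API:

```python
# Illustrative sketch of retrieve-then-summarize. search() and llm()
# are hypothetical stand-ins, not Kagi's actual API.
def search(query: str) -> list[dict]:
    raise NotImplementedError

def llm(prompt: str) -> str:
    raise NotImplementedError

def answer(query: str, k: int = 5) -> str:
    # Take the top k search results and pack them into the prompt.
    results = search(query)[:k]
    context = "\n\n".join(
        f"[{i}] {r['title']}: {r['snippet']}"
        for i, r in enumerate(results, start=1)
    )
    prompt = (f"Using only the search results below, answer: {query}\n\n"
              f"{context}")
    return llm(prompt)
```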

Kagi’s search itself is mostly not LLM-based, and I still find the results and features worth the price after years of growing frustration with Google’s decay. I subscribed to Kagi’s “Ultimate” plan specifically for access to all the premium language models: subscribing to either ChatGPT or Claude alone would cost about the same as Kagi, while Kagi gives me access to both (plus Mistral and Gemini). So if you’re interested in playing around with the latest premium models, I still think Kagi’s Ultimate plan is a good deal.

That said, I’ve been disappointed with the development of LLMs this year across the board, and I’m not convinced any of them are worth the money at this point. This isn’t so much a problem with Kagi as with all the LLM vendors. The models have gotten significantly worse for my use cases compared to last year, and I don’t quite understand why; I guess they’re optimizing for benchmarks that simply don’t align with my needs. I had great success getting zsh or Python one-liners last year, for example, whereas now they always seem to give me wrong or incomplete answers.

My biggest piece of advice when dealing with any LLM-based tool, including Kagi’s, is: don’t use it for anything you’re not able to validate and correct on your own. It’s just a time-saver, not a substitute for your own skills and knowledge.


A non-smartphone, that is, a cell phone like the ones today’s parents had when we were young, which we used for calls and text messages, was enough for us, and it did not cause addiction.

That’s not the way I remember it. Texting addiction was a thing. That’s how Twitter became popular; it was basically a way to broadcast SMS to friends at first.

I guess it’s a matter of degrees.

Ad-based services are the real problem here, I think. You don’t hear people complaining about Wikipedia addiction.


Thank you for saving me the trouble of investigating this as an option.

No reason to tolerate proprietary licenses when there are so many viable FLOSS solutions out there.


I feel this.

Back in the 90s, there was a fantastic paint program for Mac called ColorIt! (The exclamation point is part of the name, though this is the last time I will respect that because it’s obnoxious; lookin’ at you, Yahoo!*)

It was a commercial product, but ColorIt 2.3 was eventually released as freeware once newer major versions went on sale. Version 2.3 was everything I needed, and while I did try ColorIt 4.0, it didn’t click with me the way 2.3 did. At the time, I felt they bowed to the pressure of Adobe’s success: instead of playing to their unique strengths, they made ColorIt’s UI a bit too much like Photoshop. So I stuck with version 2.3.

By the time Mac OS X came around, ColorIt was no longer in active development. But OS X had the “Classic” environment, something akin to an OS 9 VM tightly integrated into OS X. Classic apps didn’t look or feel like native OS X apps, and running Classic came with a heavy RAM burden. But I did it anyway, because ColorIt 2.3 was da bomb.

I continued using ColorIt 2.3 up until Apple killed support for Classic in Mac OS X 10.5 Leopard.

At that point, the intrepid developers came out of hiding and created a Carbon port of ColorIt 4.5 that could run natively on OS X. The port was compiled only for PowerPC, which meant it didn’t run natively on Intel Macs, but it did run thanks to Apple’s Rosetta compatibility layer, at least until Apple axed that as well.

If I ever get into pixel art again, I’ll probably run ColorIt 2.3 in an OS 9 VM with SheepShaver or whatever works best nowadays.

*That exclamation point is strictly to emphasize my disdain for Yahoo.