• 2 Posts
  • 7 Comments
Joined 1Y ago
Cake day: Jun 27, 2023


Time zones are an endless source of frustration; this one doesn’t sound too bad, though:

> Going forward, all timestamps in the API are switching from timestamps without time zone (2023-09-27T12:29:59.113132) to ISO8601 timestamps (e.g. 2023-10-29T15:10:51.557399+01:00 or Z suffix). In order to be compatible with both 0.18 and 0.19, parse the timestamp as ISO8601 and add a Z suffix if it fails (for older versions).

https://github.com/LemmyNet/lemmy/pull/3496
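
Here’s a minimal sketch of that compatibility shim in Python. One caveat: `datetime.fromisoformat` parses the 0.18-style naive timestamp successfully instead of raising, so the Python equivalent of “add a Z suffix on failure” is attaching UTC whenever no offset came back (the function name is my own):

```python
from datetime import datetime, timezone

def parse_lemmy_timestamp(ts: str) -> datetime:
    # Python < 3.11 rejects a literal "Z" suffix in fromisoformat(),
    # so normalize it to an explicit UTC offset first.
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    if dt.tzinfo is None:
        # 0.18-style timestamp without time zone: assume UTC,
        # matching the PR's "add a Z suffix" advice.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt

# 0.19 responses keep their offset; 0.18 responses come back as UTC.
print(parse_lemmy_timestamp("2023-10-29T15:10:51.557399+01:00"))
print(parse_lemmy_timestamp("2023-09-27T12:29:59.113132"))
```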


Change log for upcoming Lemmy version 0.19.0
https://programming.dev/post/3666732

I am just reposting this from the original post: https://lemmy.ml/post/5711722. It’s interesting to see this for the software we’re all using and it makes me want to learn a bit more about the architecture. Quite a few user-facing features and some backend improvements. For example:

> Outgoing Federation Queue
>
> The federation queue has been rewritten to be much more performant and reliable. This is irrelevant for client developers, but admins should look out for potential federation problems. If you run multiple Lemmy backends for horizontal scaling, be sure to read the updated documentation and set the new configuration parameters. The Troubleshooting section has information about how to find out the state of the federation queues.
>
> https://github.com/LemmyNet/lemmy/pull/3605

This data structure uses a 2-dimensional array to store the counts, as documented in this Scala implementation: https://github.com/twitter/algebird/blob/develop/algebird-core/src/main/scala/com/twitter/algebird/CountMinSketch.scala. I’m still trying to understand it as well.

Similar to your idea, I had thought that by using k bloom filters, each with its own hash function and bit array, one could store an approximate count up to k for each key, which might also be a wasteful or naïve solution.

PDF link: http://www.eecs.harvard.edu/~michaelm/CS222/countmin.pdf
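
Here’s a minimal single-process sketch of that 2-D layout in Python; the class shape and the blake2b-salted hashing are my own illustration, not taken from the Scala code:

```python
import hashlib

class CountMinSketch:
    """Minimal count-min sketch: a depth x width 2-D array of counters
    with one salted hash per row. Estimates can overcount, never undercount."""

    def __init__(self, depth: int = 4, width: int = 1024):
        self.depth, self.width = depth, width
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, key: str):
        # One cell per row, chosen by salting the key with the row index.
        for row in range(self.depth):
            digest = hashlib.blake2b(f"{row}:{key}".encode()).digest()
            yield row, int.from_bytes(digest[:8], "big") % self.width

    def add(self, key: str, count: int = 1):
        for row, col in self._cells(key):
            self.table[row][col] += count

    def estimate(self, key: str) -> int:
        # Collisions only inflate counters, so the row-wise minimum
        # is the tightest available (upper-bound) estimate.
        return min(self.table[row][col] for row, col in self._cells(key))
```

Unlike the k-bloom-filters idea, each key touches exactly one counter per row, so memory stays at depth × width counters no matter how large the counts grow.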


I haven’t used them in Spark directly, but here’s how they’re used for computing sparse joins in a similar data processing framework:

Let’s say you want to join two data “tables” A and B. When B has many more unique keys than are present in A, computing “A inner join B” would require shuffling lots of B, including all those extra keys that can never match.

Knowing this, you can add a step before the join that computes a bloom filter of the keys in A and then applies that filter to B. The join from A to B-filtered then only considers relevant keys from B, hopefully with much less total computation than the original join.
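
Here’s a toy single-machine sketch of that pattern in Python. The hand-rolled `BloomFilter` and `filtered_join` are illustrative stand-ins for what a distributed engine would run across partitions:

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: k salted hashes into one bit array.
    May report false positives, never false negatives."""

    def __init__(self, size: int = 1 << 16, k: int = 4):
        self.size, self.k = size, k
        self.bits = bytearray(size)

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.blake2b(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key) -> bool:
        return all(self.bits[pos] for pos in self._positions(key))

def filtered_join(a_rows, b_rows, key):
    """Inner join after pruning B with a bloom filter of A's keys."""
    bloom = BloomFilter()
    for row in a_rows:  # build the filter over A's smaller key set
        bloom.add(row[key])
    # Prune B before the expensive, shuffle-heavy join. False positives
    # only let a few extra rows through; the join still discards them.
    b_pruned = [row for row in b_rows if bloom.might_contain(row[key])]
    return [(a, b) for a in a_rows for b in b_pruned if a[key] == b[key]]
```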


Collage sounds really interesting, will check it out. Another variation on the bloom filter I recently learned about is the count-min sketch. It allows storing and incrementing a count along with each key, and can answer “probably in the set with count greater than _” or “definitely not in the set”.

Thanks for adding more detail on the DB use-cases!


Bloom filters: real-world applications
What are your real-world applications of this versatile data structure? They are useful for optimization in databases like SQLite and query engines like Apache Spark. Application developers can use them as concise representations of user data for filtering previously seen items. The linked site gives a short introduction to bloom filters along with some links to further reading:

> A Bloom filter is a data structure designed to tell you, rapidly and memory-efficiently, whether an element is present in a set.
>
> The price paid for this efficiency is that a Bloom filter is a probabilistic data structure: it tells us that the element either definitely is not in the set or may be in the set.
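
As a tiny self-contained demonstration of those two possible answers (sizes deliberately small so a collision is plausible; purely illustrative, not from the linked site):

```python
SIZE, SEEDS = 64, (0, 1, 2)   # tiny on purpose so false positives can happen
bits = [False] * SIZE

def positions(item: str):
    return [hash((seed, item)) % SIZE for seed in SEEDS]

for item in ("alice", "bob", "carol"):   # insert the set
    for p in positions(item):
        bits[p] = True

def might_contain(item: str) -> bool:
    return all(bits[p] for p in positions(item))

print(might_contain("alice"))    # inserted items always answer "may be in set"
print(might_contain("mallory"))  # usually False ("definitely not in set"); a
                                 # True here would be a false positive, never a miss
```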

Although your current role wouldn’t seem very senior at a large organization, “senior” is a relative term, and at this company it seems like you are the engineer with ownership responsibilities over the end-to-end software development of a production system. So it might still be reasonable to use a senior title if there are other benefits.


It’s probably not going to work as a defense against training LLMs (unless everyone does it?), but it also doesn’t have to: it’s an interesting thought experiment that can aid understanding of this technology from an outside perspective.


I agree with how you characterized it, and the term “AI engineer” didn’t resonate with me as defined by the author. If such an engineer doesn’t need to know about the data involved (“nor do they know the difference between a Data Lake or Data Warehouse”), then I don’t think they will be able to ship an AI/ML product based on data.

New titles can be helpful for sorting out roles that share some skill sets, such as the distinction that emerged at some companies between Data Scientist and ML Engineer, where the latter focuses on shipping production software using ML.