I write code and play games and stuff. My old username from reddit and HN was already taken, and I couldn’t think of anything else I wanted to be called, so I just picked some random characters like this:

>>> import random
>>> ''.join([random.choice("abcdefghijklmnopqrstuvwxyz0123456789") for x in range(5)])
'e0qdk'

My avatar is a quick doodle made in KolourPaint. I might replace it later. Maybe.

I understand a little Japanese, but I'm not very good at it.

Alt: e0qdk@reddthat.com

  • 1 Post
  • 13 Comments
Joined 1Y ago
Cake day: Sep 22, 2023


It’s not a GUI library, but Jupyter was pretty much made for the kind of mathematical/scientific exploratory programming you’re interested in doing. It’s not the right tool for making finished products, but is intended for creating lab notebooks that contain executable code snippets, formatted text, and visual output together. Given your background experience and the libraries you like, it seems like it’d be right up your alley.
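For a sense of what that workflow looks like, a single notebook cell might contain something like this (numpy and matplotlib here are just stand-ins; the libraries you like would slot in the same way):

```
import numpy as np
import matplotlib.pyplot as plt

# quick exploratory check: how does damping affect a decaying sine wave?
t = np.linspace(0, 10, 500)
for damping in (0.1, 0.3, 0.5):
    plt.plot(t, np.exp(-damping * t) * np.sin(2 * np.pi * t),
             label=f"damping={damping}")
plt.legend()
plt.show()  # in a notebook, the figure renders inline right below the cell
```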


Can Z3 account for lost bits? Did it come up with just one solution?

It gave me just one solution the way I asked for it. With additional constraints added to exclude that first solution, it also gave me a second one, but that second solution is peculiar to my implementation and does not match yours. If you modeled exactly how the bits are supposed to end up in the result, you could probably enumerate any other solutions that exist; I just did it in a quick and dirty way.

This is (with a little cleanup) what my code looked like:

solver code
#!/usr/bin/env python3

import z3

# the four observed outputs (each looks like a 32-bit value divided by 2**32)
rand1 = 0.38203435111790895
rand2 = 0.5012949781958014
rand3 = 0.5278898433316499
rand4 = 0.5114834443666041

# xoshiro128** modeled symbolically; the 32-bit state words live in wider
# BitVecs and get masked down to 32 bits with 0xFFFFFFFF where it matters
def xoshiro128ss(a,b,c,d):
    t = 0xFFFFFFFF & (b << 9)
    r = 0xFFFFFFFF & (b * 5)
    r = 0xFFFFFFFF & ((r << 7 | r >> 25) * 9)  # rotl32(r, 7) * 9
    c = 0xFFFFFFFF & (c ^ a)
    d = 0xFFFFFFFF & (d ^ b)
    b = 0xFFFFFFFF & (b ^ c)
    a = 0xFFFFFFFF & (a ^ d)
    c = 0xFFFFFFFF & (c ^ t)
    d = 0xFFFFFFFF & (d << 11 | d >> 21)       # rotl32(d, 11)
    return r, (a, b, c, d)

# symbolic initial state; these are 64-bit BitVecs only constrained to be
# non-negative, which is the quick-and-dirty part mentioned above
a,b,c,d = z3.BitVecs("a b c d", 64)
nodiv_rand1, state = xoshiro128ss(a,b,c,d)
nodiv_rand2, state = xoshiro128ss(*state)
nodiv_rand3, state = xoshiro128ss(*state)
nodiv_rand4, state = xoshiro128ss(*state)

# match the symbolic outputs against the observed floats scaled back up by 2**32
z3.solve(a >= 0, b >= 0, c >= 0, d >= 0,
  nodiv_rand1 == int(rand1*4294967296),
  nodiv_rand2 == int(rand2*4294967296),
  nodiv_rand3 == int(rand3*4294967296),
  nodiv_rand4 == int(rand4*4294967296)
  )
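
The extra constraints to exclude the original solution (mentioned above) were along these lines. This is a sketch rather than the exact code I ran; it reuses the bitvectors and constraints from the script above and plugs in the solution values from my other comment:

```
# Sketch: ask the solver for a different model by blocking the known solution.
# Uses z3.Solver directly instead of the z3.solve convenience wrapper.
s = z3.Solver()
s.add(a >= 0, b >= 0, c >= 0, d >= 0,
      nodiv_rand1 == int(rand1*4294967296),
      nodiv_rand2 == int(rand2*4294967296),
      nodiv_rand3 == int(rand3*4294967296),
      nodiv_rand4 == int(rand4*4294967296))

# rule out the solution already found
s.add(z3.Not(z3.And(a == 2299200278, b == 2929959606,
                    c == 2585800174, d == 3584110397)))

if s.check() == z3.sat:
    print(s.model())
else:
    print("no other solution")
```

Since the bitvectors here are 64-bit while the real state words are 32-bit, a "second solution" found this way may differ from the first only in the upper bits, which is part of why it's peculiar to this implementation.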

I never heard about Z3

If you’re not familiar with SMT solvers, they are a useful tool to have in your toolbox. Here are some links that may be of interest:

Edit: Trying to fix formatting differences between kbin and lemmy
Edit 2: Spoiler tags and code blocks don’t seem to play well together. I’ve got it mostly working on Lemmy (where I’m guessing most people will see the comment), but I don’t think I can fix it on kbin.


If I understand the problem correctly, this is the solution:

solution

a = 2299200278
b = 2929959606
c = 2585800174
d = 3584110397

I solved it with Z3. Took less than a second of computer time, and about an hour of my time – mostly spent trying to remember how the heck to use Z3 and then a little time debugging my initial program.


What I’d do is set up a simple website that uses a little JavaScript to write the current date and time into the page and periodically refresh an image under/next to it. Size the image to fit whatever free space is left on the iPad’s screen, and then you can put anything you want there (pictures, reminder text, whatever) with your favorite image editor. Upload a new image to the server whenever you want to change the note. The point of using an image is that it’s really easy to do and keeps the effort of redoing the layout to a minimum: just drag things around in your image editor and you know everything will fit as expected as long as you don’t change the resolution, instead of mucking around with CSS and maybe breaking something when you can’t see the device to check that it displays correctly.
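
Roughly the sort of thing I mean, sketched with Python’s built-in http.server (the note.png filename, one-minute refresh, and port are placeholders):

```
#!/usr/bin/env python3
# Sketch: serves a page that shows the current date/time and periodically
# reloads a note image. Put "note.png" next to this script; the filename,
# refresh interval, and port are placeholders.
import http.server

PAGE = b"""<!DOCTYPE html>
<html>
<body>
  <h1 id="clock"></h1>
  <img id="note" src="note.png" style="max-width:100%">
  <script>
    function tick() {
      document.getElementById("clock").textContent = new Date().toLocaleString();
    }
    function refreshNote() {
      // add a query string so the browser doesn't serve a cached copy
      document.getElementById("note").src = "note.png?" + Date.now();
    }
    setInterval(tick, 1000);
    setInterval(refreshNote, 60000);  // re-fetch the note once a minute
    tick();
  </script>
</body>
</html>"""

class Handler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(PAGE)))
            self.end_headers()
            self.wfile.write(PAGE)
        else:
            # fall back to serving files from the current directory (e.g. note.png)
            super().do_GET()

if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), Handler).serve_forever()
```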

There are a couple of issues to watch out for that may or may not matter in your particular circumstances: what happens if the internet connection or the server goes down, screen burn-in, keeping the browser from being closed or switched to another page, keeping the device powered, and so on. If you need to solve all of that, it might be more trouble than just buying something purpose-built… but getting a first-pass DIY version working is trivial if you’re comfortable hosting a website.

Edit: If some sample code that you can use as a starting point would be helpful, let me know.


Any ways to get around the download failing

I did this incredibly stupid procedure with Firefox yesterday as a workaround for a failing Google Takeout download:

  • backup the .part file from the failed download
  • restart the download (careful – if you didn’t move/back it up, it will be deleted and you will have to download the whole thing again; found this out the hard way on a 50GB+ file… that failed again)
  • immediately pause the new download after it starts writing to disk
  • replace the new .part file with the old .part file from earlier (or see [1] below)
  • Firefox might not show progress for a long time, but will eventually continue the download (I saw it reading the file back from disk with iotop so I just let it run)
  • sanity check that you actually got the whole thing and that it is usable (in my case, I knew a hash for the file)

[1] You can actually replace the new .part file with anything that has the same size in bytes as the old file. I replaced it with a file full of zeros and manually merged the end onto the original .part file with a tiny custom Python script, since I had already moved the incomplete file to other media before realizing I could try this. (In my case, the incomplete file would still have been useful even with the last ~1MB cut off.)
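
The merge itself is just appending everything past the old file’s length onto the old file. A rough sketch of the kind of script I mean (the filenames are placeholders, not what I actually used):

```
#!/usr/bin/env python3
# Sketch: append the newly-downloaded tail onto the original partial file.
# "original.part" is the old incomplete download; "redownloaded.bin" is the
# file Firefox finished after the .part swap (zeros up to where the old file
# ended, real data after that). Filenames are placeholders.
import os
import shutil

OLD = "original.part"
NEW = "redownloaded.bin"
OUT = "merged.bin"

old_size = os.path.getsize(OLD)

with open(OLD, "rb") as f_old, open(NEW, "rb") as f_new, open(OUT, "wb") as f_out:
    shutil.copyfileobj(f_old, f_out)   # keep everything we already had
    f_new.seek(old_size)               # skip the zero-filled prefix
    shutil.copyfileobj(f_new, f_out)   # append only the new tail

print(f"wrote {os.path.getsize(OUT)} bytes to {OUT}")
```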

There are probably better options in most cases – like Thunderbird for mailbox as other people suggested, or rclone for getting stuff from Drive – but if you need to get Takeout to work and the download keeps failing this may be another option to try.


Rule 9 from Agans’s Debugging: If you didn’t fix it, it ain’t fixed

Intermittent problems are the worst…


I’ve used Wireshark when I want to inspect the traffic going through my computer. I’ve found it particularly handy for debugging my own networking code. I’ve also used netstat to see active connections and programs listening for traffic when I don’t care about the packet contents specifically.


With that said, can I possibly only allow traffic to and fro from the proxy through my firewall?

Yes. That is what I suggested. If you configure the firewall to only allow traffic to/from the specific IP and port combination of your proxy, other traffic will be blocked.

I should be able to (in theory) inspect traffic too, although I don’t know how far that will take me.

You can do content filtering via a proxy like that, yes. A similar sort of configuration is used on school computers to do things like block adult content, with varying degrees of success. Some ad-blocking techniques work on similar principles.


If I understood your question correctly, you’d run the proxy application (which might be Squid or Apache or some other program) either on the host computer outside the VM or elsewhere on your network. (I’m not well versed on all the ins and outs of setting Firefox up to communicate through a proxy; I just know it can be done.) The proxy would listen for incoming traffic on a specific port you configure. You then tell Firefox (in its network settings) to communicate with the specific IP and port of the proxy instead of talking to web servers directly.

To prevent other programs from communicating, you’d firewall off the VM with iptables (or maybe ufw or something else depending on what you use on your system). You’d set it to drop all traffic going to/from the VM’s network except packets going to or coming from the specific IP/port combinations you want to allow.

This isn’t a bulletproof way to block other apps from talking to the internet: anything that knows about the proxy (or that can hijack or manipulate a program like Firefox that you’ve told about the proxy) could still communicate with web servers through it. Depending on your specific concerns, though, it may be good enough.


You could try configuring Firefox to access the internet through a proxy and then block the VM off from everything except the proxy and your network mount with a firewall (outside the VM).


It’s not clear to me either exactly what hardware is required for the reference implementation, but there’s a bunch of discussion about getting it to work with llama.cpp in the HN thread, so it might be possible soon (or maybe already is) to run it on the CPU if you’re willing to wait longer for it to process.

Let us know how it goes!



Description from the site:
```
Mistral AI team is proud to release Mistral 7B, the most powerful language model for its size to date.

Mistral 7B in short

Mistral 7B is a 7.3B parameter model that:
- Outperforms Llama 2 13B on all benchmarks
- Outperforms Llama 1 34B on many benchmarks
- Approaches CodeLlama 7B performance on code, while remaining good at English tasks
- Uses Grouped-query attention (GQA) for faster inference
- Uses Sliding Window Attention (SWA) to handle longer sequences at smaller cost

We're releasing Mistral 7B under the Apache 2.0 license, it can be used without restrictions.
- Download it and use it anywhere (including locally) with our reference implementation
- Deploy it on any cloud (AWS/GCP/Azure), using vLLM inference server and skypilot
- Use it on HuggingFace

Mistral 7B is easy to fine-tune on any task. As a demonstration, we're providing a model fine-tuned for chat, which outperforms Llama 2 13B chat.
```

Adding a summary to this: the article gives some history of OpenOffice (a zombie project that was essentially replaced by LibreOffice after Oracle bought Sun), describes the odd patterns in its recent commit history (e.g. regular changes that are entirely or almost entirely whitespace fiddling), and asks readers to email the Apache foundation to push them to make it clear that the project is dead.