#Ollama v0.14.1 has experimental image generation models.
ollama run x/z-image-turbo
Only available on Apple Silicon Macs and on Linux with CUDA, and apparently more models are coming soon, such as GLM-Image, Qwen-Image-2512, Qwen-Image-Edit-2511... #LLM #ML #AI github.com/ollama/ollama/relea…
#AI #ML #llm #ollama
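Besides the CLI, Ollama also exposes a local REST API on port 11434; a minimal Python sketch of driving it (assuming a default install, and assuming the experimental image model accepts a plain prompt the same way text models do, which I haven't verified):

```python
import json
from urllib import request

# Ollama's default local API endpoint (documented /api/generate route).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST a prompt to a locally running Ollama and return the response text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to False the whole response arrives as one JSON object instead of a stream of chunks, which keeps the sketch short.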

I rode passenger today on a patrol watching for ICE in my neighborhood in Minneapolis.

Our ride was mostly uneventful; the neighborhood we patrolled has been a target, just not this afternoon.
It took me a while to follow everything that is going on, and I felt rather incompetent even as a passenger. These communities are rapidly developing processes and their own kind of professional standards even as new people are constantly joining in. There is a large set of Signal groups covering different portions of the city and into the suburbs, as well as many subdivisions for reactive response. There's a schedule of dispatchers who run calls, a formal handoff, and other support people who take notes and follow the chat. Drivers try to spot ICE, peering into tinted windows (so many people have tinted windows!) and looking up license plates.

There's a protocol that I don't yet understand for what to do when you encounter an ICE vehicle. Several times a day I hear caravans of ICE and observers honking as they go down one of the streets by my house; the protocol is to do that only after a direct encounter in which ICE officers leave their vehicle. I'm not sure what that implies in terms of numbers.

Throughout the neighborhood many corners had people in hi-viz jackets on guard. It was around the time kids were coming home from school. These are being organized separately, by schools, community organizations, churches, and the many ad hoc groups that are popping up block by block.

This is all heartening, and impressive, and also sad because it's not nearly enough. People are doing their best, but their best can only slow down ICE. We can't solve this from here.



If you want a bunch of random noise and chaos in your life, there's a silly thing I put together, called FX Radio.

It's a box running Liquidsoap that mixes several audio players going through a huge library of sound effects, production libraries, and other odd things.

Don't expect anything to make sense. If it does, it's purely coincidental.

stream.borris.me:8888/fx

This is running on the same machine that hosts @NoiseBox, which throws a random sound at the fediverse once an hour, at a random minute each hour.

Fun fact:
This is running on a shelf under my mom's desk. While she knows the box is there, she doesn't know what it does. So, it's fun just for that reason.


@Bri New silly bug! Send a new post. In the invisible interface, navigate to that new post. Before anything else comes in, delete that post. Now you're apparently still on that post in the buffer even though it doesn't exist! What's more, you can keep hitting next post, next post, next post in the invisible interface, and it's entirely silent; your position in the buffer keeps going up and up and up, e.g. Home: 171 of 162. To recover, hit previous post enough times to get back to the real last post; after that, hitting next post correctly recognizes that you're at the end and prevents you from going off into never-never land again.
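The likely fix, sketched as a toy Python buffer (not the client's actual code): re-clamp the cursor whenever a post is deleted, and refuse to advance past the last real post.

```python
class Buffer:
    """Toy post buffer: a list of posts plus a cursor into it."""

    def __init__(self):
        self.posts = []
        self.pos = -1  # -1 means "no post focused yet"

    def add(self, post):
        self.posts.append(post)

    def next_post(self):
        # Refuse to walk past the last real post (the reported bug let
        # the position climb to e.g. 171 of 162 after a deletion).
        if self.pos < len(self.posts) - 1:
            self.pos += 1
        return self.current()

    def prev_post(self):
        if self.pos > 0:
            self.pos -= 1
        return self.current()

    def delete(self, index):
        del self.posts[index]
        # Re-clamp the cursor so it never points beyond the buffer.
        self.pos = min(self.pos, len(self.posts) - 1)

    def current(self):
        return self.posts[self.pos] if 0 <= self.pos < len(self.posts) else None
```

With the clamp in `delete`, hammering next post after a deletion just keeps you parked on the real last item instead of silently counting upward.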

> Elasticsearch was never a database. It was built as a search engine API over Apache Lucene (an incredibly powerful full-text search library), but not as a system of record. Even Elastic’s own guidance has long suggested that your source of truth should live somewhere else, with Elasticsearch serving as a secondary index. Yet, over the last decade, many teams have tried to stretch the search engine into being their primary database, usually with unexpected results.
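The "system of record elsewhere, search on the side" pattern the quote describes looks roughly like this; a sketch where a dict stands in for the primary database and plain functions stand in for the Elasticsearch client (these are not real ES API calls):

```python
# Writes go to the primary store first; the search index is derived
# from it and can always be rebuilt, so losing it is an inconvenience,
# not data loss.

primary_db = {}    # stands in for a real database (the system of record)
search_index = {}  # stands in for Elasticsearch (a derived, rebuildable index)

def save_document(doc_id, doc):
    primary_db[doc_id] = doc     # 1. durable write to the system of record
    index_document(doc_id, doc)  # 2. best-effort update of the search index

def index_document(doc_id, doc):
    # Naive inverted index: word -> set of document ids.
    for word in doc["body"].lower().split():
        search_index.setdefault(word, set()).add(doc_id)

def rebuild_index():
    # Because the primary store holds the truth, the index is disposable:
    # wipe it and replay every document.
    search_index.clear()
    for doc_id, doc in primary_db.items():
        index_document(doc_id, doc)

def search(word):
    return search_index.get(word.lower(), set())
```

Teams that skip step 1 and write only to the index are the ones the quoted post is warning about: once the index is the only copy, `rebuild_index` has nothing to replay from.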

We demanded to keep our normal logs, but you know how corporate IT is ...

Long post on the durability of US racism


in reply to Tim Bray

What would happen if ICE wanted to join the Fediverse? Most instances probably wouldn’t host that account, but some would. The account would be broadly blocked at both individual and instance levels, and some instances would probably defederate the hosting instance. Some folks would migrate between instances based on these choices. All this, I think, would be a good outcome?

Yo, Bulgarian is not an easy language, yet here we are coding it in. If anyone can help tune it, it may well need some tuning. The good news is that it's largely a "what you hear is what you get" kind of language, and with 30 letters, well, I've seen worse. The bad news? Cyrillic support is hard to code into the IPA normalizer. More good news? We need Cyrillic support anyway before turning the normalizer into a frontend DLL, which is the next milestone of this project.
Why not keep the frontend normalizer in Python, you ask? It's good for now, but to ship recompilable, buildable DLLs it's clear that we'll have to do this, not to mention when NVDA goes 64-bit, although for the latter the issue is more about embedding eSpeak than the frontend itself.
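For the curious, Bulgarian's "what you hear is what you get" property means a naive normalizer is almost a letter-for-letter table. A simplified sketch, not the project's actual mapping (real Bulgarian has vowel reduction, final devoicing, and palatalization that this ignores):

```python
# Simplified Bulgarian Cyrillic -> IPA table, one entry per letter of the
# 30-letter alphabet. Multi-character outputs like "ts" and "ʃt" show why
# one letter != one phoneme even in a phonemic orthography.
BG_TO_IPA = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "ɛ",
    "ж": "ʒ", "з": "z", "и": "i", "й": "j", "к": "k", "л": "l",
    "м": "m", "н": "n", "о": "ɔ", "п": "p", "р": "r", "с": "s",
    "т": "t", "у": "u", "ф": "f", "х": "x", "ц": "ts", "ч": "tʃ",
    "ш": "ʃ", "щ": "ʃt", "ъ": "ɤ", "ь": "j", "ю": "ju", "я": "ja",
}

def naive_bg_ipa(word: str) -> str:
    """Letter-by-letter transcription; unknown characters pass through."""
    return "".join(BG_TO_IPA.get(ch, ch) for ch in word.lower())
```

The pass-through for unknown characters is deliberate: Latin letters, digits, and punctuation survive the pass untouched, which is roughly what you want before handing text to a later normalization stage.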

NV Speech Player (github.com/tgeczy/NVSpeechPlay…): Updated the readme with a section that describes how phoneme data is added or changed, as some have asked. Feel free to pull the repo, modify data.py, and open a PR, or send me an updated set of lines if you'd like, and they will get tuned accordingly.
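If the workflow is unclear, the shape of such a change is roughly this. The entry below is hypothetical: the real field names and values live in data.py, and the readme section mentioned above documents the actual format. Here `cf1`/`cf2` are illustrative formant-frequency parameters:

```python
# Hypothetical sketch of editing a phoneme table: each phoneme maps to a
# dict of synthesizer parameters. Field names and values here are
# placeholders, not NVSpeechPlayer's actual data.
phonemes = {
    "a": {"cf1": 700, "cf2": 1200},
    "i": {"cf1": 300, "cf2": 2300},
}

def tune(phoneme: str, **changes):
    """Apply parameter tweaks to one phoneme, e.g. tune("a", cf1=720)."""
    phonemes[phoneme].update(changes)
    return phonemes[phoneme]
```

A PR that tweaks a vowel would then be a one-line diff against the relevant entry, which is what makes "drop me an updated set of lines" practical.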

In "I Still Hate WiFi" news:

In the house where I'm staying now, there are three wireless access points in strategic places in the house. Unfortunately, the linking between them isn't as good as I'd like it to be.

I've been primarily using a remote computer from my iPhone for nearly three weeks, so when there is any lag, it becomes pretty obvious.

Pretty much every night here, the lag increases to an annoying degree, so much so that I get better results by turning off WiFi and using mobile data (currently AT&T is working the best in the house).
Because I'm using Tailscale, I still get access to all the same resources whether I'm on WiFi or not, and it only takes a second or two to reestablish the connection when the provider is switched, even if I change between my primary and secondary eSIM for data.

This place is so congested, though. If I do an environment scan from my UniFi controller, I see about 85 wireless access points in the area.
It's not that bad in my New York City neighborhood.

I've confirmed that latency and jitter are perfectly fine on wired devices here in the house. Even with three wireless access points spread out as much as possible as far as spectrum goes, things still get stupid, especially in the upstream direction.

Anyway, I hate WiFi. The end.

In case anyone cares, here's the latency and jitter I get when pinging my iPhone on AT&T from one of my home computers in New York, about 500 miles away.

36 packets transmitted, 36 received, 0% packet loss, time 73ms
rtt min/avg/max/mdev = 40.558/171.173/370.040/104.540 ms

On WiFi, it was more like min 39 ms, max 550 ms, average 100-something ms. I've seen better while on AT&T.

Ehhh, whatever.

in reply to John Dowling, Jr.

@jmd2000 Well, it technically last got code changes in 2021; I picked it up again a few days ago because even though eSpeak did eventually integrate Speech Player, it never quite sounded as good as the standalone version that just borrowed it for phonemes. So I'm super happy to have it back; for me it's the closest thing so far to an Eloquence replacement.