ollama run x/z-image-turbo
Only available on Apple Silicon Macs and Linux with CUDA, and apparently more models are coming soon, such as GLM-Image, Qwen-Image-2512, Qwen-Image-Edit-2511... #LLM #ML #AI github.com/ollama/ollama/relea…
I rode passenger today on a patrol watching for ICE in my neighborhood in Minneapolis.
Our ride was mostly uneventful, the neighborhood we patrolled has been a target but just not this afternoon.
It took me a while to follow everything that is going on, and I felt rather incompetent even as a passenger. These communities are rapidly developing processes and their own kind of professional standards, even as new people are constantly joining. There is a large set of Signal groups covering different portions of the city and extending into the suburbs, as well as many subdivisions for reactive response. There's a schedule of dispatchers who run calls, a formal handoff process, and other support people who take notes and follow the chat. Drivers are trying to spot ICE, peering into the tinted windows (so many people have tinted windows!) and looking up license plates.
There's a protocol that I don't yet understand for what to do when you encounter an ICE vehicle. Several times a day I hear the caravans of ICE and observers honking as they go down one of the streets by my house; the protocol is to do that only after a direct encounter, once ICE officers have left their vehicle. I'm not sure what that implies in terms of numbers.
Throughout the neighborhood, many corners had people in hi-viz jackets standing guard. It was around the time kids were coming home from school. These watches are being organized separately, by schools, community organizations, churches, and the many ad hoc groups that are popping up block by block.
This is all heartening, and impressive, and also sad because it's not nearly enough. People are doing their best, but their best can only slow down ICE. We can't solve this from here.
If you want a bunch of random noise and chaos in your life, there's a silly thing I put together, called FX Radio.
It's a box running Liquidsoap that mixes several audio players going through a huge library of sound effects, production libraries, and other odd things.
Don't expect anything to make sense. If it does, it's purely coincidental.
This is running on the same machine that hosts @NoiseBox, which throws a random sound at the fediverse once an hour, at a random minute each hour.
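The "once an hour, at a random minute each hour" scheme is simple to sketch. This is just an illustrative guess at the scheduling logic (the function name and use of Python are my own, not anything from the actual NoiseBox box):

```python
import datetime
import random

def next_post_time(now, rng=random):
    """Pick the time of the next post: a random minute within the
    following clock hour, so exactly one post lands in each hour."""
    next_hour = (now.replace(minute=0, second=0, microsecond=0)
                 + datetime.timedelta(hours=1))
    return next_hour + datetime.timedelta(minutes=rng.randrange(60))
```

Scheduling against the *next* clock hour (rather than "now plus a random delay") is what keeps the posts from ever bunching up or skipping an hour.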
Fun fact:
This is running on a shelf under my mom's desk. While she knows the box is there, she doesn't know what it does. So, it's fun just for that reason.
> Elasticsearch was never a database. It was built as a search engine API over Apache Lucene (an incredibly powerful full-text search library), but not as a system of record. Even Elastic’s own guidance has long suggested that your source of truth should live somewhere else, with Elasticsearch serving as a secondary index. Yet, over the last decade, many teams have tried to stretch the search engine into being their primary database, usually with unexpected results.
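The "secondary index" pattern the quote describes is easy to sketch: writes always land in the system of record first, the search index is derived from it, and the index can be rebuilt from scratch at any time. The class and function names below are illustrative stand-ins, not real Elasticsearch client calls:

```python
class PrimaryStore:
    """Stand-in for a real database: the system of record."""
    def __init__(self):
        self.rows = {}

    def save(self, doc_id, doc):
        self.rows[doc_id] = doc

class SearchIndex:
    """Stand-in for Elasticsearch: a derived, rebuildable index."""
    def __init__(self):
        self.docs = {}

    def index(self, doc_id, doc):
        self.docs[doc_id] = doc

    def search(self, term):
        # Toy full-text match over the "text" field.
        return [i for i, d in self.docs.items() if term in d["text"]]

def write(store, index, doc_id, doc):
    store.save(doc_id, doc)   # the source of truth gets the write first
    index.index(doc_id, doc)  # then the secondary index is updated

def rebuild(store, index):
    """If the index is lost or corrupted, replay it from the store."""
    index.docs.clear()
    for doc_id, doc in store.rows.items():
        index.index(doc_id, doc)
```

The key property is that `rebuild` exists at all: if your only copy of the data lives in the search engine, losing the index means losing the data, which is exactly the "unexpected results" teams run into.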
We demanded to keep our normal logs but you know how corporate IT is ...
ICE on Bluesky: bsky.app/profile/icegov.bsky.s…
Verified.
Why not keep the frontend normalizer in Python, you ask? It's good enough for now, but to ship recompilable and buildable DLLs it's clear we'll have to do this, not to mention when NVDA goes 64-bit, although there the bigger issue is embedding eSpeak rather than the frontend itself.
GitHub - tgeczy/NVSpeechPlayer: A Klatt-based speech synthesis engine written in c++
In "I Still Hate WiFi" news:
In the house where I'm staying now, there are three wireless access points in strategic places. Unfortunately, the linking between them isn't as good as I'd like it to be.
I've been primarily using a remote computer from my iPhone for nearly three weeks, so when there is any lag, it becomes pretty obvious.
Pretty much every night here, the lag increases to an annoying degree, so much so that I get better results by turning off WiFi and using mobile data (currently AT&T is working the best in the house).
Because I'm using Tailscale, I still get access to all the same resources whether I'm on WiFi or not, and it only takes a second or two to reestablish the connection when the provider is switched, even if I change between my primary and secondary eSIM for data.
This place is so congested, though. If I do an environment scan from my UniFi controller, I see about 85 wireless access points in the area.
It's not that bad in my New York City neighborhood.
I've confirmed that latency and jitter are perfectly fine on wired devices here in the house. Even with three wireless access points spread out as much as possible as far as spectrum goes, things still get stupid, especially in the upstream direction.
Anyway, I hate WiFi. The end.
In case anyone cares, here's the latency and jitter I get when pinging my iPhone on AT&T from one of my home computers in New York, about 500 miles away.
36 packets transmitted, 36 received, 0% packet loss, time 73ms
rtt min/avg/max/mdev = 40.558/171.173/370.040/104.540 ms
On WiFi, it was more like a 39 ms minimum, a 550 ms maximum, and an average somewhere around 100 ms. I've seen better while on AT&T.
Ehhh, whatever.
Winter blue tardis