GNOME Shell running out of the box on postmarketOS 23.06
Kudos to pabloyoyoista, GNOME and postmarketOS contributor/liaison.
He's made the GNOME experience as upstream as possible. We've discussed further improvements.
BTW, he's been looking for help https://blogs.gnome.org/pabloyoyoista/2023/03/05/gs-and-pmos-a-bumpy-road/
#postmarketOS #GNOME #LinuxMobile
A swan on what looks like a collision course – taken while I was standing on a hill next to the lake it took off from, apparently in a good spot for a flyover. It was gaining altitude faster than it was closing the distance, so this didn't feel quite as perilous as it might appear!
#birding #birds #birdwatching #birdphotography #birdsofmastodon #photography #wildlife #wildlifephotography #naturephotography
libei 1.0.0 is out! Finally!
https://gitlab.freedesktop.org/libinput/libei/-/releases/1.0.0
Almost 3 years of development (admittedly with pauses) later, libei has morphed from a "hey, let's wrap emulating input into a library" idea into what's basically a point-to-point transport layer for logical input events. With the pending XDG Desktop Portal and compositor changes we'll be able to emulate and capture input in Wayland environments. And probably do much more; there's already interest in accessibility implementations.
When making adaptive apps for GNOME, it's important that you don't think of smaller sizes as being a mobile mode. You still need to account for desktop users that simply want to use a smaller window - like if they're using a small laptop.
Be mindful that all your UI elements are still accessible with the pointer and via keyboard navigation.
OK so for those of you who, like past me, don’t listen to conferences, tech talks, etc. but are actually interested in #programming: I want to show you one feature of #Swift and/or #SwiftUI which actually blew my mind because of its simplicity. Before the #iOS 17 era you would do something like
```
class User: ObservableObject {
    @Published var firstName = "Nuno"
    @Published var lastName = "Mao"
}

// …

struct ContentView: View {
    @StateObject var user = User()

    var body: some View {
        Text("I am \(user.firstName) \(user.lastName)")
    }
}
```
Then, in subsequent views you could pass this class around with the `@ObservedObject` property wrapper so that all views stayed in sync and pointed to the same data.
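A child view might have looked something like this (my illustrative sketch, not part of the original example):
```
struct GreetingView: View {
    // Observes the same User instance the parent owns via @StateObject.
    @ObservedObject var user: User

    var body: some View {
        Text("Hello, \(user.firstName)")
    }
}
```
However, with iOS 17 they made it work like this: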
```
@Observable
class User {
    var firstName = "Nuno"
    var lastName = "Mao"
    private var eMailAddress = "nuno69a@gmail.com"
}

// …

struct ContentView: View {
    var user = User()

    var body: some View {
        Text("I am \(user.firstName) \(user.lastName)")
    }
}
```
This code brings some interesting implications with it, not least that we no longer need to mark our properties with the `@Published` property wrapper. Basically, all publicly exposed properties behave as if marked `@Published` automatically, without any further intervention on our part. `private` properties stay untouched, as in the example above.
The problem starts when we want to make our data actually changeable, like so:
```
TextField("Name", text: $user.firstName)
TextField("Last name", text: $user.lastName)
```
Funnily enough, it won’t actually work, failing with “Cannot find '$user' in scope”. Or will it? They actually made it very clever: to minimize the overhead of the code generated at compile time, they once again make us opt in with a property wrapper. One small change and our properties can be bound to stuff:
```
struct ContentView: View {
    @Bindable var user = User()
    // …
```
That’s all, stuff will now work as expected.
#programming #apple #iOS #macOS #WWDC
Managing model data in your app | Apple Developer Documentation: Create connections between your app’s data model and views.
This piece captures so much of my own thinking about generative AI in the context of my writing classes. I think there are productive ways of using the technology, but as a cognitive scientist, I’m very skeptical about the claims I’ve seen that eliminating “busywork” will open up space for students to do higher level work. To be clear, I do think that it could, perhaps most obviously for assignments that require writing but aren’t about learning to write.
1/
How ChatGPT robs students of motivation to write and think for themselves (The Conversation): People who have used AI to help with writing report a loss of pride and ownership in what they produce.
The only way to build skills is to do the work, to practice, and unfortunately “practice” often feels like “busywork” to students if we instructors aren’t thoughtful about creating meaningful arenas for practice.
3/
So that is what we need to be doing as instructors: designing opportunities for students to practice skill-building in ways that do NOT feel like the kind of busywork best avoided by turning to ChatGPT. I was recently re-reading James Lang’s “Cheating Lessons” book, and though it was written a decade ago and as such doesn’t say anything about generative AI, I actually think it’s a GREAT reference for how to create these kinds of assignments.
4/
So I restarted my Discord client and got notified that I can switch to a proper username without the discriminator (the numbers after the username). I was able to get my preferred username, and within about 2 hours I started getting harassed by a guy who wanted it. His account was created 2 months ago; mine was created in September 2015.
So yeah, this whole Discord username switch is going to be a complete clusterfuck, be prepared!
Tech speakers, it's 2023. Stop using moms as your example of a non-technical audience. It's wrong, it's not funny, and whatever you were saying, most of your audience is now not thinking about it.
Just use the exec team as an example instead and get on with your life.
And dudes, if you think this is a small thing (it's not), then it should be a small thing for you to fix your shit. If you'd like to not have to deal with people being annoying when you do it, you can literally have that.
Everyone else, if you're in a position to do so without it being a risk for you, tell the speaker and the conference, every single time.
Also? Marketing? HR? Largely female-coded work, so you're doing the same thing, but now also showing that you don't really respect your colleagues. And yes, they are actually your colleagues, and they do notice. Punch up. The CEO will laugh too, and he can take it.
If he can't, you really want to change jobs anyway.
It's weird to me that Apple is sunsetting Macs made as recently as 2017. If you are buying a new Mac to stay on the latest and greatest software, please don't landfill your current Mac. Donate it to a FreeGeek or similar organization. An "unsupported Mac" can continue as a valued computer running Linux for many more years beyond "the sunset".
In honor of #WWDC23 I updated this bug demonstrating that Safari 16.5 on iOS and macOS breaks tables to the point of being unusable when CSS display properties are applied:
https://bugs.webkit.org/show_bug.cgi?id=257458
I have been tracking Apple’s ongoing failure to fix this #a11y bug for 5 years:
https://adrianroselli.com/2022/07/its-mid-2022-and-browsers-mostly-safari-still-break-accessibility-via-display-properties.html
So, I've been working on this report on commercial self-generated CSAM for a few months. This is an area that is not very well studied, and also poorly handled by many platforms. While Instagram is the focal point of the report, this affects a very wide swath of the tech industry, from payment services to e-commerce to social networking tools. Since some news sites are attributing every conclusion they've drawn to "the researchers", please read the actual report:
https://stacks.stanford.edu/file/druid:jd797tp7663/20230606-sio-sg-csam-report.pdf
With grim determination, Earth threw up a bubble, wrapping all of them in as tight a shield as she could manage. She anchored it in the ground, reaching down to firm rock.
“Gennie, if you have any energy to spare, help hold that bubble, please.”
“Of course, Momma Earth.”
She could sense the girl’s smile, and somehow, that gave her hope.
Earth Splits, Pillars of the Empire Book 1
(Finally making progress!)
AXObserver is one of the worst APIs I’ve ever seen out of Apple.
1. There are no proper constructors on the class, so you have to call static methods (that aren’t even defined on the class) to create a new instance.
2. Said functions don’t return the object to you. You have to declare the object yourself and then pass it in as an out pointer, checking the return value of the create function for error codes! What? This is Swift, not Objective-C, how about you get the hell away from me with your out pointers and error codes?
3. They’re required to be locked to a specific process. This means I can’t observe system-wide accessibility events without iterating through a list of visible processes and installing an observer for each and every one of them. This could be made more efficient by only doing it if it has foreground focus, but who in their right mind thought this was a good idea?
Someone please tell me there’s at least a better way to observe accessibility events system wide? Please? I can get past the unfriendly API, but not… that.
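For the curious, the dance looks roughly like this in Swift (a minimal sketch; the pid and the notification constant are placeholders I picked for illustration):
```
import ApplicationServices

// C-style callback: no capturing closures unless you thread a refcon through.
let callback: AXObserverCallback = { _, _, notification, _ in
    print("Got \(notification)")
}

let pid: pid_t = 12345  // hypothetical target process

var observer: AXObserver?
// The "constructor": returns an AXError and hands the object back
// through an out pointer.
guard AXObserverCreate(pid, callback, &observer) == .success,
      let observer else {
    fatalError("AXObserverCreate failed")
}

// Observers are locked to that one process, so registration is per app.
let app = AXUIElementCreateApplication(pid)
AXObserverAddNotification(
    observer, app,
    kAXFocusedUIElementChangedNotification as CFString, nil)

// Nothing fires until the observer's source is on a run loop.
CFRunLoopAddSource(
    CFRunLoopGetCurrent(),
    AXObserverGetRunLoopSource(observer), .defaultMode)
```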
Once #WWDC23 goes live, I'll have my usual thread of #accessibility related content here. I used to do this on Twitter. It involves discussing the announcements as well as testing of features announced prior to and throughout WWDC. I will welcome questions and requests for testing. I think I'll like doing this here much more than I did on Twitter. Oh and this thread will probably go on for months. I'll post things periodically as updates come.
I know you want to try the latest and greatest. But you also want to be completely sure that what you need day-to-day will work on your device. So be absolutely sure that you're not going to need your phone or your iPad for important work or something else. You'll hear me talk about possible bugs and behaviors along with testing notes in this thread. Being a rather technical user, I know how to get out of situations that I put myself in.
So, just be careful!
Oh, and don't expect me to live-toot or anything like that. I might occasionally pop in and say something that interests me about #WWDC. A lot of the #accessibility announcements were prebaked when Apple previewed the major changes and features coming to their operating systems in May. If I see something that will impact accessibility, I'll note it during the keynote or, if I think of it, after the keynote.
All right. #WWDC is over and here's a general thought that I'm left with. The #accessibility story for the #VisionPro product is going to be a little mixed until we hear more details about people's experiences. That said, if Apple can actually deliver what they said they'd deliver, the accessibility story for some people becomes a lot more intriguing. Partially sighted people may find this device far more useful than a phone.
Having flexible gesture-based controls could be much more useful for disabled people with motor and/or physical disabilities. As game controllers and other devices such as Bluetooth keyboards/pointers will be available, switch controls will also probably work. And eye-based navigation will work quite well.
This, of course, poses some significant concerns for #blind people, who will not be able to interact using their eyes. Even authentication will be a challenge.
So I'm waiting for Apple to answer some of those questions. For #blind people, performing finger-based gestures will most probably work the same as it does on devices with touch surfaces. Since any surface will do, gestures could be performed without having to worry about the device.
That's not to say the #accessibility story for the #VisionPro device will necessarily be as mature as we hope. We'll just have to wait and see.
A bit of a concern for physical/motor disabilities: we have to consider the initial step of putting the device on. It may not be doable independently.
Some of the ergonomic issues could be either exacerbated or relieved by the interface shown in #VisionPro.
Keep in mind that these are a lot of random thoughts just based on what I heard in the Apple keynote. There will be a lot of discovery over the week as we see APIs and discussions.
Stay tuned.
As a blind person, what excites me just from watching this is the strong possibility of apps such as Envision, Seeing AI, and others coming to this new #VisionPro platform. It's far more powerful than anything we've seen before, and it erases a lot of the challenges with phone cameras.
All of this excitement aside, the device is certainly going to be bloody expensive, so whether disabled people will be able to afford it is another question entirely. The choice will be between stand-alone hardware like Envision's glasses and #VisionPro, with the possibility that apps like Aira, Seeing AI, and others will show up here. A device that allows for a consolidated #accessibility experience, or something that stands alone?
On the other hand, Apple updating AirPods to make them more useful for #accessibility purposes is extremely good. Adaptive hearing based on environmental sounds will be quite useful for hearing health, as well as for quite a few deaf and hard of hearing people. Apple is silently making AirPods Pro their true hearing showpiece.
Note that I'm not sure yet whether these features will be limited to AirPods Pro. More when I know.
#WWDC #WWDC23
Now, let me see if I can take a look at some other iOS/macOS features that could benefit #accessibility.
The live voicemail transcripts for phone calls, which let you review voicemails as they are being recorded, are great for everyone, but better yet for #accessibility. Transcripts also come to voice messages. Great for Deaf/HH people.
Better keyboard suggestions through ML (note that Apple never said the word #AI) will enable speedier typing.
I'm choosing to see the new AirDrop features as a major #accessibility win in iOS/iPadOS. A lot of the new features are based on convenience, and they work quite nicely to enhance accessibility.
This is something new. I'd never expected Eloquence on iOS 17 to have a lisp. Lol.
Either downloading Siri voices takes a long time on iOS devices, or the functionality is broken in iOS 17 dev beta 1. Trying to test the Siri voices with VoiceOver, since the neural voices are supposed to be available rather than the natural ones.
7 #accessibility developer sessions at #WWDC this week. Whoa. Seems like we will find out a little about #VisionOS's accessibility, since there is a session that will show developers how to build accessible experiences. This answers some of the questions: we will have access to that new platform. Since the iOS and iPadOS frameworks are built in, I was pretty sure we would.
Sessions will also show devs how to adopt the new Assistive framework and more.
Lots of little buggies to file. Guess what I'll be doing while my body refuses to sleep?
Before I go do some other things, like take a break, I'll end on a positive note for tonight. So far, I haven't seen the Braille panning bug that was in iOS 16.5. I also haven't had an Eloquence crash, though it's pretty early for that.
I fully intend to test Braille. So let me know if there's something specific you want me to test, not only Braille but other things as well.
Couple of notes on iOS 17 and speech:
The lisp that Eloquence is exhibiting is related to the higher sampling rate that's now configurable through voice settings. I believe I have it set properly to let me listen comfortably.
For Eloquence, each voice can be configured with settings such as pitch, pitch offset, head size, etc.
Eloquence's dictionary issue still exists. "Navada Access" will continue to be a thing for now. Let's see if we can change this. These settings also exist for Eloquence on the Mac.
Settings are also available for Vocalizer voices. The voice must be active or in the language rotor to configure it. These settings relate to things such as pauses.
The instability I'd reported for Siri voices was nothing more than growing pains for the initial dev beta release. Some people might have seen the original Siri voices that were incredibly large, over 350 MB in size. After a little while, the new Siri voices showed up for download. As far as I can tell, these are all under 70 MB in size. They are quite responsive. I've tested the American ones for now.
I'll create audio demos later in the day.
As I suspected, VoiceOver will be on the #VisionPro device, and we'll be able to use a finger as a pointer rather than our eyes. Now we just need to know how authentication will work.
Speaking of authentication and signing in to devices, one of the features Apple didn't discuss for iOS/iPadOS was the ability to sign in to a new device if you have nearby devices already signed in.
I'll be eager to watch the #WWDC session that discusses the ability for developers to conduct accessibility audits in their own apps. It's one of the 7 #accessibility sessions.
https://developer.apple.com/documentation/xctest/xcuiapplication/4191487-performaccessibilityaudit
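Based on the doc above, a UI test can run an audit in a few lines (a minimal sketch; the specific audit types and the ignore-filter are my own illustration):
```
import XCTest

final class AccessibilityAuditTests: XCTestCase {
    func testMainScreenPassesAudit() throws {
        let app = XCUIApplication()
        app.launch()

        // Runs the accessibility audit on the current screen and throws
        // if issues are found. Restricting the audit types and filtering
        // issues is optional; .all is the default.
        try app.performAccessibilityAudit(for: [.dynamicType, .contrast]) { issue in
            // Return true to ignore a known issue; we keep them all.
            return false
        }
    }
}
```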
The full #accessibility story for #VisionOS and the initial #VisionPro device is far more impressive than I'd imagined. Then again, I shouldn't be surprised. Even if Apple has a checkered past when it comes to properly maintaining some of these features, they do have an ethos of making products fully accessible from the start. The list of accessibility features supported out of the box on these new devices, gathered from the State of the Union, is in the next post.
Full list of #accessibility features:
VoiceOver
Braille
Audio Descriptions
Support Per-App Settings
Zoom
Dynamic Type
Reduce Transparency
Reduce White Point
Color Filters
Bold Text
Voice Control
Spoken Content
Background Sounds
Pointer Control
Dwell Control
Button Shapes
Accessibility Shortcut
AssistiveTouch
MFi Hearing Devices
Subtitles and Closed Captions
Switch Control
Full Keyboard Access
Image Descriptions
Guided Access
Reduce Motion
Left/Right Balance
Mono Audio
Looking through a couple of interesting API additions from #WWDC related to #accessibility.
If your app has a custom video player (I'm looking at you, app formerly known as HBO Max), you can automatically detect flashing lights and dim them.
MAFlashingLightsProcessor | Apple Developer Documentation: A class that processes a framebuffer object to detect and dim sequences of flashing lights.
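As a sketch of the integration point (the check below is the iOS 17 MediaAccessibility call as I understand it; treat the exact wiring as an assumption):
```
import MediaAccessibility

// Respect the user's system-wide "Dim Flashing Lights" setting before
// doing any processing of your own; individual frames then go through
// an MAFlashingLightsProcessor as described in the doc above.
func shouldDimFlashingLights() -> Bool {
    MADimFlashingLightsEnabled()
}
```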
The more I use iOS 17 and iPadOS 17, the more I suspect that the automatic image descriptions feature has gotten an upgrade. I'm getting better descriptions than I was in iOS 16, and I don't think it's my imagination. Could others confirm this? I'll have to set up scenarios where I get descriptions from the same source on both OSes. I think this could be fun or tedious. For now, I'll take it as fun.
Apple is truly making the betas available to everyone. So I'll remind you again not to install betas on primary devices, no matter how stable you think they are. That said, the iOS 17 betas are remarkably stable. Things could change; I've seen features break in subsequent betas before. I'm not surprised about the stability of this release, since most features are nothing major, unlike iOS 16. Same re iPadOS and macOS.
Just be careful!
All right then, here are some details about how #accessibility will work on #VisionPro.
#VoiceOver has its own gestures, similar to iOS.
* Pinch with the thumb and index finger of the right hand to move forward (equivalent to flick right).
* Pinch with the thumb and middle finger to move back (equivalent to flick left).
* Pinch with the thumb and ring finger on the right hand, or the thumb and index finger on the left, to select (equivalent to double tap).
The crown will allow you to assign a shortcut to the #accessibility option of your choice. Triple-tap it on the #VisionPro device to launch.
RealityKit has a new accessibility component, which allows the assigning of accessibility properties. There are custom labels, values, and traits, as well as custom rotors, custom actions, and custom content. Activate and adjust system actions can also be assigned.
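A minimal sketch of what assigning those properties might look like (names per the visionOS beta docs; treat them as approximate):
```
import RealityKit

var accessibility = AccessibilityComponent()
accessibility.isAccessibilityElement = true
accessibility.label = "Red ball"            // spoken by VoiceOver
accessibility.value = "Rolling"
accessibility.traits = [.button]
accessibility.systemActions = [.activate]   // approximate, per the session

let ball = ModelEntity()
ball.components.set(accessibility)
```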
#WWDC #WWDC23
As I'm trying to simulate these gestures by putting my hands on my lap, turning up my palms, and bringing various fingers together, I'm finding the interaction quite natural and faster than on a slab of glass. Since I don't have to hold anything, it could be very efficient.
Switching on VoiceOver on #VisionPro turns off the normal gestures to avoid the system confusing the user. There is a new direct gesture mode, however, that will let developers allow #blind people to interact with apps and actions. We'll be able to choose whether to invoke direct gesture mode. If an app is developed well, this will be like direct touch. I suspect it might be used more here, though.
Developers will need to do a couple of things differently and provide additional information, like spatial awareness, to VoiceOver users in #VisionOS apps. That includes gesture recognition announcements for direct gestures. System actions will allow additional capabilities. It will be necessary to make announcements about important events in your apps, such as entering a room, or an object showing up and its position. Think games.
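For example, a hedged sketch of posting such an announcement with the iOS 17-era API (I'm assuming the same call carries over to visionOS apps):
```
import Accessibility

// Tell VoiceOver users about an event they can't observe visually.
func announceRoomEntry(player: String) {
    AccessibilityNotification.Announcement("\(player) entered the room").post()
}
```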
#accessibility #WWDC #WWDC23
VoiceOver gestures will also rely on multiple pinches, holds, and multiple fingers, all seemingly pretty simple for now.
Devs will need to be aware of dynamic type issues as well as contrast ratios.
New concepts for spatial computing: anchors can be used to position objects relative to other objects, relative to a hand, or relative to other things in the world. Things can also be anchored to the camera so that they appear in the same spot on the display; such content follows your head.
Anchors are important for Zoom users. Depending on how they're used, they could impact partially sighted users. Certain anchors will need to be avoided so that low vision users can get closer to objects with Zoom to identify and read them. Positioning the Zoom lens to read head-anchored content might be difficult.
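To illustrate the difference (a RealityKit sketch; the coordinates are arbitrary):
```
import RealityKit

// World-anchored content stays put, so a Zoom user can physically
// lean in to read it.
let worldAnchor = AnchorEntity(world: [0, 1.5, -1])

// Camera/head-anchored content follows the user's head and can never
// be approached; avoid it for text low vision users must read.
let headAnchor = AnchorEntity(.camera)
```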
Devs will need to be especially aware of motion in #VisionPro apps, since a reduce-motion option is available for #accessibility purposes. When Reduce Motion is on, alternatives will be needed for zooming motion effects, rapid rotation effects, etc. APIs and notifications will be available, and cross-fading can help.
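A SwiftUI-style sketch of honoring that setting (the environment flag exists on iOS today; I'm assuming visionOS mirrors it):
```
import SwiftUI

struct RevealingCard: View {
    // True when the user has enabled Reduce Motion.
    @Environment(\.accessibilityReduceMotion) private var reduceMotion
    @State private var visible = false

    var body: some View {
        Text("Welcome")
            .opacity(visible ? 1 : 0)
            // Cross-fade only when Reduce Motion is on; otherwise slide in.
            .offset(y: visible || reduceMotion ? 0 : 40)
            .animation(.easeInOut(duration: 0.3), value: visible)
            .onAppear { visible = true }
    }
}
```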
#accessibility features include inputs to accommodate people who use alternative input. The Dwell Control accessibility feature allows an alternative set of gestures for motor/physical disabilities without using the hands. It includes tap, scroll, long-press, and drag. Think of this as no different from inputs like game controllers. Pointer Control will also be available instead of eye tracking, with head movement, wrist position, or the index finger as alternatives.
To conclude this #VisionPro #accessibility recap from the video: there are multiple considerations for accessible apps, including excellent quality captioning for spoken content, and developers will need to ensure spatial awareness of captions and sounds. Turn on these options through accessibility settings and see how your app behaves. Use the SwiftUI frameworks if possible, and use accessibility audits when developing your apps. In short, make your apps accessible.
Now to the feature that more than one of you has asked me about: using the camera to read panels by pointing a finger at buttons on appliances.
Other than the fact that it's using the Arabic voice to speak English, which is terrible, the feature works rather well. They need to iron out a couple of bugs there. In a way, it's quite funny. I'll try to do a couple of demos of these things tomorrow if I have time.
Now that I've had a couple of days to think about #VisionOS and #VisionPro, I'm contemplating writing a detailed post on some of the implications for #accessibility. There were quite a few questions answered. Some still remain. What say you?
If I do this, I need to check on the status of my website. Inaugural post, maybe?
#WWDC
If you think Apple is not interested in what everyone is calling #AI, just look at the #WWDC sessions on machine learning. They've had these sessions for years. While Microsoft, Google, and Amazon try to sell their cloud infrastructure for #ML use, Apple has already had their platform in play for all those years.
I'm liking the changes made to the share sheet in iOS 17 this year. I wonder if the share sheet uses ML to determine the best apps to share with given the context. So far I like it.
This code change was spotted by 9to5Google. In an upcoming release of Android Studio Canary with the latest AVD Manager, when you select an Android TV 13 image, you'll see that "Tiramisu is an unsupported Android TV version."
https://9to5google.com/2023/06/07/google-tv-android-14-beta/
https://android.googlesource.com/platform/tools/adt/idea/+/f18242b7320bde0329785077c4a2e28a4483110f
I can now corroborate this. A source tells me that Google informed Android TV partners a few days ago that they would no longer certify builds based on Android TV 13.
They didn't really give a reason for discontinuing Android TV 13, just that they're increasing their focus on Android TV 14, which could be a big release for the platform.
I have reached out to Google for comment on this news.