For a modern AAA game, you're right - of course not.
But remember, they're saying this thing is six times as powerful as the Steam Deck.
That sounds like it might be able to handle 4 players running about as well as they do on the Steam Deck.
And that's ignoring the whole idea of the server running on the Steam Machine, and custom clients running on everyone's Steam Frames. That would give you even more compute power.
I like the idea that the Steam Frame, in this "Living Room" scenario, doesn't even need to do network prediction. It just blasts game state to each of the clients.
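To make the "no prediction" idea concrete, here's a minimal sketch of that model: an authoritative server on the Steam Machine advances the simulation and blasts the same full-state snapshot to every client each tick. All names (GameServer, Client, tick) are illustrative, not from any real Steam SDK:

```python
import json

class GameServer:
    def __init__(self):
        self.clients = []          # in a real setup: one socket per Steam Frame
        self.state = {"tick": 0, "players": {}}

    def connect(self, client):
        self.clients.append(client)

    def tick(self):
        # Advance the simulation, then broadcast the full state to everyone.
        self.state["tick"] += 1
        snapshot = json.dumps(self.state).encode()
        for client in self.clients:
            client.receive(snapshot)

class Client:
    def __init__(self):
        self.last = None

    def receive(self, snapshot):
        # No client-side prediction: just render whatever state arrived.
        self.last = json.loads(snapshot)

server = GameServer()
frames = [Client() for _ in range(4)]
for f in frames:
    server.connect(f)
server.tick()
print(frames[0].last["tick"])  # every client holds the same snapshot
```

On a living-room LAN, round-trip latency is low enough that skipping prediction entirely is plausible, which is exactly what makes this scenario interesting.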
So there are multiplayer games on the same TV, like GoldenEye on the N64.
There's LAN games.
And now there's maybe a new category... Living Room, Multi-Screen games?
I'm frustrated by the failure rate of my Eneloops over the years. I have dozens of them, and I swear every other time I recharge them, one more starts blinking and refuses to recharge.
Also I would recommend switching to the IKEA rechargeable batteries which are supposedly the same thing except cheaper.
Inkjets are the best bang for the buck. I had good luck with higher-end Epson printers (with good gloss/matte photo paper). The ink is much better at remaining viable for a long time, and it no longer freaks out whenever the relative humidity goes up.
With inkjets, though, you need to keep using them. Otherwise, the ink clogs.
Expensive process printers have wide gamuts. Laser printers basically suck. Xerox used to make decent color laser printers, but they had an odd “waxy” ink. Not sure if they still do it.
I don’t think anyone does dye-sub printers anymore. They used to be good.
I found out that in the embedded world (think microcontrollers without an MMU), Tensorflow lite is still the only game in town (pragmatically speaking) for vendor-supported hardware acceleration.
Not anymore, especially since other routers like Vercel's AI Gateway, and proxies from LLM providers like Fal, DeepInfra, and AtlasCloud, didn't get the memo about enforcing BYOK for ID-verification-required models after GPT-5's release.
Theoretically yes, in practice no. There is (according to my sensors) a fairly large CO2 increase inside a room when a modern furnace (with external exhaust) is running. I've confirmed this with several units (all made in the last 10 years), and it's not that the windows are closed - when the furnace turns off, the CO2 drops. And it's not that the exhaust is placed in a bad spot either.
Yes, fossil fuels are the best way to keep pollution away - they just need to be installed perfectly, configured and maintained regularly, monitored to make sure everything is running correctly, and you need additional properties lying around vacant just in case there are leaks, misconfigurations, poor installation, etc. But we must use fossil fuels, there are no other options!
I had a gas furnace that wasn't properly maintained as far as cleaning. Result: insufficient air flow for full combustion. Secondary result: CO build up in basement space. Tertiary result: asthma-like symptoms for me.
Your control for this test should be (and maybe was, you don't say) running the furnace circulation fan without running the burner. CO2 levels are unlikely to be uniform throughout a building, so mixing will change (raise or lower) the CO2 levels depending on where you're measuring.
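The comparison being suggested is simple enough to sketch: log CO2 (in ppm) in each condition and compare the averages. The readings below are entirely made up, just to show the shape of the analysis:

```python
from statistics import mean

# Hypothetical CO2 readings (ppm) at the same sensor location.
fan_only = [620, 635, 610, 640, 625]    # circulation fan running, burner off
burner_on = [780, 810, 795, 805, 790]   # burner running

# If the rise persists with the burner off, it's a mixing artifact;
# if it only appears with the burner on, the furnace is the suspect.
delta = mean(burner_on) - mean(fan_only)
print(f"mean rise with burner on: {delta:.0f} ppm")
```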
There are large, large gaps of parallel stuff that GPUs can't do fast. Anything sparse (or even just shuffled) is one example. There are lots of architectures that are theoretically superior but aren't popular due to not being GPU friendly.
That’s not a flaw in parallelism. The mathematical reality remains that independent operations scale better than sequential ones. Even if we were stuck with current CPU designs, transformers would have won out over RNNs.
Unless you are pushing back on my comment "all kinds" - if so, I meant "all kinds" in the way someone might say "there are all kinds of animals in the forest", it just means "lots of types".
I was pushing back against "all kinds". The reason is that I've been seeing a number of inherently parallel architectures, but existing GPUs don't like some aspect of them (usually the memory access pattern).
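The sequential-versus-parallel distinction in this exchange can be sketched in a few lines of NumPy. The shapes and weights are toy assumptions, not any real model:

```python
import numpy as np

T, d = 8, 4  # toy sequence length and feature size
rng = np.random.default_rng(0)
x = rng.standard_normal((T, d))

# RNN-style: each step depends on the previous hidden state -> serial loop.
W = rng.standard_normal((d, d)) * 0.1
h = np.zeros(d)
for t in range(T):
    h = np.tanh(x[t] @ W + h)       # step t cannot start before step t-1

# Attention-style: every position is scored against every other at once.
scores = x @ x.T / np.sqrt(d)       # one parallel (T, T) matmul
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = weights @ x                   # all T outputs computed together
print(out.shape)
```

The dense matmuls in the second half are exactly what GPUs are built for; the point upthread is that plenty of *other* parallel patterns (sparse or scattered memory access) don't map onto that hardware nearly as well.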
I noticed that they renamed the Element mobile app to Element Classic. Has Element X reached feature parity and stability yet? For how long will Classic be maintained?
> The outgoing Element mobile app (‘classic Element’) will remain available in the app stores until at least the end of 2025, to ensure a smooth transition
I can't find any other communication from Element Creations other than that.
The renaming to Element Classic doesn't bode well considering that Element X still doesn't support a vast number of home servers and a number of Synapse authn/authz features.
If they remove it from the app store, my advice for my users is going to be to switch to fluffychat, and I'll eventually migrate away from Synapse to some flavor of Conduit.
Sorry to hijack this thread to ask - but what is the current state of sliding sync? Does it still require a separate proxy service to enable sliding sync if you're self-hosting a homeserver; or is it upstreamed into synapse? Also is there a list of clients that are sliding sync aware?
Not that many clients have actually adopted it though, because the MSC is still not 100% finalised - it's finally entering the final review stages over at https://github.com/matrix-org/matrix-spec-proposals/pull/418.... Right now Element X uses it (exclusively), and Element Web has experimental support for it; I'm not sure if any others actually use it yet.
Good to know! These are very important features, and not having them really gets in the way of switching off of Classic. I am worried about "initial" support - what is going to break with threads and spaces that I try to join with the new Element X?
Spaces are now supported in Element X, which possibly brings it to feature parity (at least I wouldn't know what's missing, and I've been using Element X for some months now because of these plans).
I regret to concur. On an iPhone Pro Max running iOS 18.7 (latest), my stopwatch says:
- Element X loads to list All Chats in 3 seconds.
- Element Classic loads to list All Chats in <1 second.
And Element X is supposed to be the "fast one", due to Rust SDK, etc. etc.
I'm giving Element X etc. the benefit of the doubt and will see them through.
But there NEEDS TO BE a user-advocate or project-manager just wailing on usability internally at Element. If you need such a person, find someone, and if you can't find anyone, hit me up, but I would think someone should be filling this role already.
In addition to bundling and network effects, one magic thing that helped apps like AOL Instant Messenger, Facebook Messenger (in its glory days), WhatsApp, Discord, and Telegram gain very wide adoption was their relatively seamless user experience.
As much as the Parent sounds like complaining, I think it's complaining in good faith. We want Matrix to succeed.
Hm. Something sounds wrong here, then. On my iPhone 12 Pro Max on iOS 26, my account (~5000 rooms) opens in about 2s in Element X iOS. On the classic app it’s about 10s (ie unusable).
Roughly how many rooms are you in? and what server is this on (it could be a serverside problem)? And what precise build of the app?
> and what server is this on (it could be a serverside problem)?
It's a hosted SaaS personal homeserver. So yes, quite possibly a server-admin issue. I've just put in a ticket to find out.
EDIT: Synapse 1.139.0
> And what precise build of the app?
Element X Version 25.10.0 (190)
EDIT: After updating to Element X Version 25.10.1 (192) [latest Update from App Store], about 2 seconds is observed -- still slower than Classic, but a little better than before. I will still finish following up regarding Server issues/info with server admins; hopefully that fixes it.
Thanks a ton for all you do! Good to know it's not the expectation.
This is really surprising. Can you do a clean launch (ie kill the app and relaunch it) and then submit a bug report from both apps and let me know what mxid to look for? (DM to @matthew:matrix.org if needed). The logs will say where the time difference is coming from. EX should always be way faster than classic Element.
Your "good experience" on Element X iOS matches my "bad experience" on Element X iOS.
See, with my Server and Chats, Classic is actually very snappy:
- Element X: ~1.5 seconds avg (rounds to 2 sec if using a non-decimal stopwatch, but more like 1.5 when measured more precisely)
- Element Classic: ~0.6 second avg (actually slightly faster visually, this includes my response time to stop the timer, probably more like just around/under 0.5 sec)
Anyway, Classic is very fast for me to open. I like it a lot. It feels almost instant.
But X takes 2-3 times as long to load. I sit there waiting for content to load, even if it's just for a second.
I really hope speed does not regress for people who already have very fast load times in Classic, once X becomes the only flagship app in the App Store.
To be complete, for anyone following along: the above hypothesis was allegedly incorrect. 2 seconds is not supposed to be normal for so few chats. Element X is supposedly normally nearly instant to load & list chats for such a small number of chats.
So, I'll try to come back here and comment if I get it resolved.
What guide were you following that told you to install coturn for Element X? It shouldn't be /that/ surprising that Element X's group-capable calling requires a group call server, and in general most people seem happy not to have to worry about coturn, given the server acts as a relay.
Element (not ElementX), the official/preferred app, works with coturn for 1-on-1 calls. But ElementX does not. IMO it is surprising to break 1-on-1 call functionality.
In Matrix 2.0, all calls are group capable (much like Matrix itself doesn’t specialcase DMs - they are just rooms with 2 people). So yes, no coturn is needed.
We haven’t got as far as interop between legacy Matrix 1:1 calling and Matrix 2.0 style MatrixRTC though, which I can see being annoying - but overall the admin burden should be simpler than running a coturn.
We’ll update the synapse docs to explain that coturn isn’t needed for MatrixRTC calls.
If you need an attention sink, try chess! Pick a time control if it's over 2 minutes of waiting, and do puzzles if it's under. I find that there's not much of a context switch when I get back to work.