Nah, I wouldn't call it outsourcing. They have AI usage KPIs. [1]
> "We need to get beyond the arguments of slop vs sophistication..."
> "We need to make deliberate choices on how we diffuse this technology in the world as a solution to the challenges of people and planet," Nadella says. "For AI to have societal permission it must have real world eval impact."
I was not speaking to just one case. Today's incident is _the norm_.
These attacks are widespread, damaging, and the repercussions are felt for decades in their wake. We _are_ being carpet bombed, and the costs for the victims are ongoing and growing. The collateral damage is everywhere.
Do you really think there's no impact?
> Cyber units from at least one nation state routinely try to explore and exploit Australia’s critical infrastructure networks, almost certainly mapping systems so they can lay down malware or maintain access in the future.
> We recently discovered one of those units targeting critical networks in the United States. ASIO worked closely with our American counterpart to evict the hackers and shut down their global accesses, including nodes here in Australia.
I guess I shouldn't be drawn by someone calling me an idiot...
But one last try.
You suggested that the cost of cyberattacks on industry is not as great as when we were destroying it with bombs instead.
However, every time we have power outages, people die. Then there is the cost of securing the infrastructure, and the cost to everyone else affected, who has to increase their resilience.
Your bank is collateral damage, as are the people freezing to death in their homes. Entire industries are on the verge of collapse - getting a new turbine to help stabilise your grid has a lead time of _years_, not days or weeks. And if you hit weeks, people die.
Insurance responds to attacks, and that trickles out to everything it touches. Visa and Mastercard have to prepare for eventualities because of attacks aimed not at them, but at power infrastructure.
When power is hit... There is nothing unaffected.
Volt Typhoon hit the US power grid, and it took a massive multinational effort, lasting almost a year, to extract them... And VT wasn't intended to do damage, just to look for weak spots so that, next time, they can cause damage. As part of that survival process, various hardware partners were kicked to the curb, and the repercussions are still being felt. Half the industry may have issues surviving because of it.
Industroyer is one of the reasons that Kyiv got as bad as it did. Malware is not something you can hand-wave away and fix. Half the city's relays were permanently damaged.
Then, of course, there was Stuxnet, which blew up centrifuges; the research centres it hit are still trying to recover to where they were back then.
Cyberattacks are a weapon of war: people die, industries die, and there is no easy path to recovery afterwards.
An entire industry exists just to defend against these kinds of attacks. The money spent on that is counted, which means it has to be less than the cost of the attacks succeeding. Trillions are spent, because there is absolute weight behind surviving these attacks.
If things were easier, it'd be an industry solely focused on backups and flipping a switch. But it's not.
All "Global Reader" accounts have "microsoft.directory/bitlockerKeys/key/read" permission.
Whether you opt in, or not, if you connect your account to Microsoft, then they do have the ability fetch the bitlocker key, if the account is not local only. [0] Global Reader is builtin to everything +365.
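To make that concrete, here's a rough sketch (untested; treat the exact Graph routes and required role/permission as assumptions on my part) of how someone holding a role with that permission could pull escrowed recovery keys via Microsoft Graph:

    # Rough sketch only: read BitLocker recovery keys escrowed to Entra ID via
    # Microsoft Graph. Assumes the caller already has an access token for a user
    # holding a role with microsoft.directory/bitlockerKeys/key/read (e.g. Global
    # Reader) and the BitLockerKey.Read.All permission. Endpoint paths from memory.
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    token = "<access token obtained via MSAL - omitted here>"
    headers = {"Authorization": f"Bearer {token}"}

    # List key metadata first; list calls never return the key material itself.
    resp = requests.get(f"{GRAPH}/informationProtection/bitlocker/recoveryKeys",
                        headers=headers)
    resp.raise_for_status()

    for meta in resp.json().get("value", []):
        # Fetch the actual recovery password for each key, one at a time.
        key = requests.get(
            f"{GRAPH}/informationProtection/bitlocker/recoveryKeys/{meta['id']}",
            params={"$select": "key"},
            headers=headers,
        )
        key.raise_for_status()
        print(meta["id"], key.json().get("key"))

The point isn't the code; it's that once the key leaves your machine, "can they read it" is a policy question, not a technical one.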
> Because hypotheticals that they could are not useful.
Why? They are useful to me, and I appreciate the hypotheticals because they highlight the gap between "they can access my data and I trust them to do the right thing" and "they literally can't access my data so trust doesn't matter."
Considering all the shenanigans Microsoft has been up to with Windows 11 and various privacy, advertising, etc. stuff?
Hell, all the times they keep enabling OneDrive despite it being really clear I don’t want it, and then uploading stuff to the cloud that I don’t want uploaded?
I have zero trust in Microsoft now, and I didn't have much more in the past either.
This 100% happens; they’ve done it to at least one of my clients in pretty explicit violation of HIPAA (they are a very small health insurance broker), even though OneDrive had never been engaged with, and indeed we had previously uninstalled OneDrive entirely.
One day they came in and found an icon on their desktop labeled “Where are my files?” that explained they had all been moved into OneDrive following an update. This prompted my clients to go into full meltdown mode, as they knew exactly what this meant. We ultimately got a BAA from Microsoft just because we don’t trust them not to violate federal law again.
What do Entra role permissions have to do with Microsoft's ability to turn over data in its possession to law enforcement in response to a court order?
That's for Entra/AD, aka a workplace domain. Personal accounts are completely separate from this. (Microsoft doesn't have an AD relationship with your account; if anything, personal MS accounts reside in their own empty Entra forest.)
> MS doesn't have a magic way to reach into your laptop and pluck the keys.
Of course they do! They can just create a Windows Update that does it. They have full administrative access to every single PC running Windows in this way.
It's largely the same for all automatic updating systems that don't protect against personalized updates.
I don't know the status of the updating systems of the various distributions; if some use server-delivered scripts run as root, that's potentially a further powerful attack avenue.
But I was assuming that the update process itself is safe; the problem is that you usually don't have guarantees that the updates you get are genuine.
So if you update a component run as root, yes, the update could include malicious code that can do anything.
But even an update to a very constrained application could be very damaging: for example, if it is for an E2EE messaging application, the update could modify it to send each encryption key to a law enforcement agency.
> the problem is that you usually don't have guarantees that the updates you get are genuine
A point of order: you do have that guarantee for most Linux distro packages. All 70,000 of them in Debian's case. And Linux distros distribute their packages anonymously, so they can never target just one individual.
That's primarily because they aren't trying to make money out of you. Making money requires a billing relationship, and tracking which of your customers own what. Off the back of that, governments can demand that particular users be targeted with "special" updates. Australia in particular demands commercial providers do that with its "Assistance and Access Bill (2018)", and I'm sure most governments in the OECD have equivalents.
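For anyone wondering what that guarantee physically is: a GPG-signed InRelease file pins the SHA256 of the Packages index, which in turn pins the SHA256 of every .deb. The last link is just a hash comparison; a toy sketch (the file name and expected hash below are placeholders, not real values):

    # Toy sketch of the last link in APT's trust chain: compare a downloaded .deb
    # against the SHA256 recorded in the (already signature-verified) Packages
    # index. File name and expected hash are illustrative placeholders.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "0123abcd..."  # value listed for this .deb in the signed Packages index
    actual = sha256_of("hello_2.10-3_amd64.deb")
    print("OK" if actual == expected else "MISMATCH - do not install")

Crucially, everyone fetches the same signed index from a mirror that doesn't know who you are, so there's no hook for serving one person a "special" package.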
Not really, but it's quite complex for Linux because there are so many ways one can manage the configuration of a Linux environment. For something high-security, I'd recommend something like Gentoo or NixOS because they have several huge advantages:
- They make it easy to set up and maintain immutable, reproducible builds.
- You only install the software you need, and even within each software item, you only build/install the specific features you need. For example, if you are building a server that will sit in a datacentre, you don't need to build software with Bluetooth support, and by extension, you won't need to install Bluetooth utilities and libraries.
- Both have a monolithic Git repository for packages, which is advantageous because you gain the benefit of a giant distributed Merkle tree for verifying that you have the same packages everyone else has (a toy sketch of the idea follows this list). As observed with xz-utils, you want a supply-chain attacker to be forced to infect as many people as possible, so more people are likely to detect it.
- Sandboxing is used to minimise the lines of code during build/install which need to have any sort of privileges. Most packages are built and configured as "nobody" in an isolated sandbox, then a privileged process outside of the sandbox peeks inside to copy out whatever the package ended up installing. Obviously the outside process also performs checks such as preventing cool-new-free-game from overwriting /usr/bin/sudo.
- The time between a patch hitting an upstream repository and that patch being part of an installed package in these distributions is short. This is important at the moment because there are many efforts underway to replace and rewrite old insecure software with modern secure equivalents, so you want to be using software with a modern design, not just five-year-old long-term-support software. E.g. glycin is a relatively new library used by GNOME applications for loading untrusted images; you don't want to be waiting 3 years for a new long-term-support release of your distribution to pick it up.
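As promised above, a toy illustration of the Merkle-tree point (the concept only, not the actual hashing scheme Gentoo or NixOS use): hash every file in the tree, then hash the sorted list of per-file hashes into one root; change any byte anywhere and your root stops matching everyone else's.

    # Toy illustration of the "giant distributed Merkle tree" idea, not the real
    # portage/nixpkgs scheme: derive one root hash from a whole package tree so
    # that tampering with any single file changes the root everyone compares.
    import hashlib
    import os

    def tree_hash(root_dir: str) -> str:
        entries = []
        for dirpath, _, filenames in os.walk(root_dir):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                entries.append(f"{os.path.relpath(path, root_dir)}:{digest}")
        return hashlib.sha256("\n".join(sorted(entries)).encode()).hexdigest()

    # Same tree contents => same root hash, for every person who computes it.
    print(tree_hash("/path/to/package/tree"))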
No matter which distribution you use, you'll get some common benefits such as:
- Ability to deploy user applications using something like Flatpak, which ensures they run within a sandbox.
- Ability to deploy system applications using something like systemd, which ensures they run within a sandbox.
Microsoft have long underinvested in Windows (particularly the kernel), and have made numerous poor and failed attempts to introduce secure application packaging/sandboxing over the years. Windows is now akin to the horse and buggy when compared to the flying cars of open source Linux, iOS, Android and HarmonyOS (v5+ in particular which uses the HongMeng kernel that is even EAL6+, ASIL D and SIL 3 rated).
Furthermore, it seems like it's specific to Azure AD, and I'm guessing it probably only has an effect if you enable the option to back up the keys to AD in the first place, which is not mandatory.
I'd be curious to see a conclusive piece of documentation about this, though.
Regular AD also has this feature: you can store the encryption keys in the domain controller. I don't think it's turned on by default, but you can do that with a group policy update.
In my twenty years, I've rerolled famous algorithms "every now and then".
It's almost wild to me that you never have.
Sometimes you need a better sort for just one task. Sometimes you need a parser because the data was never 100% standards compliant. Sometimes you need to reread Knuth for his line-breaking algorithm.
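A made-up but typical example of the "better sort for just one task" case: version strings, where the default lexicographic sort puts "1.10" before "1.2", so you roll a tiny key function instead of pulling in a dependency.

    # Made-up example: default string sort orders "1.10" before "1.2", which is
    # wrong for version numbers, so we roll a tiny sort key for this one task.
    def version_key(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))

    versions = ["1.10", "1.2", "1.2.1", "2.0"]
    print(sorted(versions))                   # ['1.10', '1.2', '1.2.1', '2.0']
    print(sorted(versions, key=version_key))  # ['1.2', '1.2.1', '1.10', '2.0']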
I was more exposed to how the JavaScript web destroyed the built-in ("standard") accessibility (for blind people) of the noscript/basic (X)HTML web, aka the classic web (or "braille terminals").
One of the powers of Wayland is dynamic discovery: a GUI application has to query the compositor's interfaces, then their features, and then turn things on and off dynamically (for instance, the clipboard).
Which "accessibility" X11 APIs are you talking about?
Under X11, we had Extended Window Manager Hints [0]. The new protocol that is meant to work everywhere is ATK/AT-SPI. However, as mentioned in the posted proposal, AT-SPI Device can't be implemented under Wayland right now.
Under Wayland, you also can't find your mouse. [1]
No input system can find out your active window, so no remapping for applications. [2]
And no key rebindings, at all, yet. [3]
The issue I posted that started this thread was proposed by Orca. This is not some random person saying that Wayland can't do what they need yet. It's one of the most popular accessibility programs available on Linux.
libinput doesn't have capability monitoring; its developers say it will need to be implemented by the server instead, so it's Wayland's problem, not theirs. [4]
Invasive, complex features and the disaster of ICCCM: I do not want any GUI application having that much power over the compositor.
Then the power of Wayland has to be leveraged the right way: a set of custom, clean accessibility/instrumentation Wayland protocols, queried dynamically for support by the GUI applications, which handle that complexity. A lot is now directly client-side, though.
You will probably have to maintain a set of forked/branched compositors with this accessibility/instrumentation code, which would be deployed as an alternative only on demand.
Please stop engaging in bad faith; that is not what I said.
You are asking the wrong people: ask the AT-SPI/ATK and GUI toolkit people to design the required interfaces (leveraging the dynamic nature of Wayland, or perhaps some other custom, simple protocols) and to develop/maintain the complexity required for those interfaces to work.
Those interfaces are beyond intrusive, which defeats client-application isolation and the compositor's independence from niche complexity, which is a cornerstone of Wayland. That's why those complex compositors (or "modules" of some huge compositors) will be on-demand only (and they are highways for malware and spyware).
> ask the AT-SPI/ATK and GUI toolkit people to design the required interfaces
As I've already pointed out, multiple times, those are the people asking Wayland for the necessary protocols, so that they _can_ design the required interfaces.
KDE has pretty much given up, and kwin is a fork with a ton of extensions [0]. Because Wayland always says no.
GNOME does the same, as do wlroots and sway. Which means that all of them have incompatible protocols, and accessibility is sharded between desktop environments. The apps you need just to press a key are all incompatible with each other.
Accessibility is not some niche thing. It is a cornerstone of interface design that assists everyone who interacts with it in some way.
Your view is very simple: Security trumps accessibility. That has been obvious since the first post.
My view is simpler: I am allowed to exist, and so security must make considerations for accessibility.
As things stand, both Windows and macOS have a better accessibility story than Linux, because of this dogged approach.
"As I've already pointed out, multiple times, those are the people asking Wayland for the necessary protocols, so that they _can_ design the required interfaces."
You are not making any sense at all: it is up to the AT-SPI/ATK people to design their own set of Wayland interfaces related to their definition of accessibility and to code/maintain the related software (which could be compositors, or modules of compositors). Wayland being a set of interfaces that are fully discoverable and dynamic at runtime makes all of that possible.
If that were the case... why would those same people be putting forward protocols to implement? Why do they have to fight, just for Wayland to say no?
Wayland is the gutter sink here. Nobody else. Everyone else has done what they can, and continues to do what they can. Wayland has said no to the very interfaces you claim need to be implemented - they already have been.
I'd add nuance to Hermans' work. It's not all experimenting blindly, but it's also not feedback-less. They advocate for "direct instruction", not just rote learning.
> As that is not a surprise, since research keeps showing that direct instruction—explanation followed by a lot of focused practice—works well.
There’s a pretty rich literature around this style of pedagogy going back decades, and it is certainly not a new idea. My preferred formulation is Vygotsky’s “zone of proximal development” [1], which is the set of activities that a student can do with assistance from a teacher but not on their own. Keeping a student in the ZPD is pretty easy in a one-on-one setting, and can be done informally, but it is much harder when teaching a group of students (like a class). The latter requires a lot more planning, and often leans on tricks like “scaffolded” assignments that let the more advanced students zoom ahead while still providing support to students with a more rudimentary understanding.
Direct instruction sounds similar but in my reading I think the emphasis is more on small, clearly defined tasks. Clarity is always good, but I am not sure that I agree that smallness is. There are times, particularly when students are confused, that little steps are important. But it is also easy for students to lose sight of the goals when they are asked to do countless little steps. I largely tuned out during my elementary school years because class seemed to be entirely about pointless minutiae.
By contrast, project work is often highly motivational for students, especially when projects align with student interests. A good project keeps a student directly in their ZPD, because when they need your help, they ask. Lessons that normally need a lot of motivation to keep students interested just arise naturally.
Escaping the sandbox has been plenty doable over the years. [0]
WASM adds a layer, but the first thing anyone will do is look for a way to escape it. And unless all software faults and hardware faults magically disappear, it'll still be a constant source of bugs.
Pitching a sandbox against ingenuity will always fail at some point; there is no panacea.
> "We need to get beyond the arguments of slop vs sophistication..."
> "We need to make deliberate choices on how we diffuse this technology in the world as a solution to the challenges of people and planet," Nadella says. "For AI to have societal permission it must have real world eval impact."
> https://www.windowscentral.com/microsoft/microsoft-ceo-satya...
[1] https://adoption.microsoft.com/files/copilot/Unlocking-AIs-I...