After the 3rd or 4th time of having to reinstall Win11 because the ONLY fix for the wifi adapter that Windows kept deleting was to reinstall the OS, I switched to Linux. I’ll never go back.
I mean, I think that they had some real data privacy issues with the “screenshot everything by default” stuff.
However, my bet is that at some point we will wind up in a situation where you have some kind of system processing your data and doing some kind of analysis so that you can make use of it. It may not leave your system (though OS vendors providing storage and “cloud backup” services is a thing with Apple, Google, and Microsoft today). But think of, for example, search. On most systems today, there is something trawling through your data, building an index, processing it, and putting it in some form via which you can access it. On mine, I don’t run a full-text indexer, but those certainly do exist.
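To make the “building an index” part concrete, here is a rough sketch in Python of the kind of inverted index a local search service might build; the directory path and tokenization here are just placeholders for illustration, not how any particular OS indexer actually works:

```python
import os
import re
from collections import defaultdict

def build_index(root):
    """Walk a directory tree and map each word to the set of files it appears in."""
    index = defaultdict(set)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for word in re.findall(r"\w+", f.read().lower()):
                        index[word].add(path)
            except OSError:
                continue  # unreadable file; skip it
    return index

def search(index, query):
    """Return the files that contain every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index[words[0]].copy()
    for w in words[1:]:
        results &= index[w]
    return results

# Example (hypothetical path):
# index = build_index(os.path.expanduser("~/Documents"))
# print(search(index, "quarterly report"))
```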
That doesn’t mean that these particular laptops will be what does it, or that Microsoft’s Copilot software — in its present form, at any rate, though if they use it as their “brand” for all their machine learning stuff, it could do all sorts of things — will be what uses it, or even that laptops will start doing a lot more local processing in general. But I think that the basic thing they are shooting for, which is, in some way, the PC extracting information from your data and then using it to provide more-useful behavior, probably will be a thing.
On emacs, by default, M-/ runs dabbrev-expand. That looks through all buffers and, depending upon how you have things set up, possibly some other data sources, and, using that as a set of potential completions, tries to complete whatever word is at point in the current buffer. Think of it as sort of a “global tab complete”. That’s a useful feature to have, and it’s a — much simpler — example of something where the system can look at your data and generate something useful from it via doing some processing. That’s been around for a long time.
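As a rough illustration of the idea (this is not how dabbrev-expand is actually implemented, just a toy Python sketch), the whole trick is: harvest words from every open buffer, then offer the ones that complete the partial word at point:

```python
import re

def dabbrev_candidates(buffers, prefix):
    """Gather every word from all open buffers that completes the given prefix."""
    seen = []
    for text in buffers:
        for word in re.findall(r"\w+", text):
            if word.startswith(prefix) and word != prefix and word not in seen:
                seen.append(word)
    return seen

# Two "buffers" the user happens to have open:
buffers = [
    "configure the configuration file before compiling",
    "conference call notes: confirm the schedule",
]
print(dabbrev_candidates(buffers, "conf"))
# ['configure', 'configuration', 'conference', 'confirm']
```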
A number of systems will look at one’s contacts list to provide completions or suggestions.
It’s pretty much the norm on mobile systems to have some kind of autocorrection for text input (though I presently have it off on my Android phone, because the soft keyboard I use, Anysoft, presently has some bug that causes it to insert bogus text in some applications). That autocorrection usually has some kind of “user dictionary”, and at least on some soft keyboards I’ve seen, automatically learning entries for that “user dictionary” based on past text that you’ve entered is a thing.
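A toy version of that “learn a user dictionary from past text” idea might look like the following Python sketch; real soft keyboards do something far more sophisticated, so treat the class and method names as made up for illustration:

```python
from collections import Counter

class UserDictionary:
    """Toy user dictionary: learns word frequencies from text the user has typed
    and suggests the most frequent words matching a prefix."""

    def __init__(self):
        self.counts = Counter()

    def learn(self, typed_text):
        # Count each word the user has typed, ignoring trailing punctuation.
        for word in typed_text.lower().split():
            self.counts[word.strip(".,!?")] += 1

    def suggest(self, prefix, n=3):
        # Rank prefix matches by how often the user has typed them.
        matches = [(w, c) for w, c in self.counts.items() if w.startswith(prefix)]
        matches.sort(key=lambda wc: -wc[1])
        return [w for w, _ in matches[:n]]

d = UserDictionary()
d.learn("meeting rescheduled, see the meeting notes")
d.learn("meet me after the meeting")
print(d.suggest("mee"))  # ['meeting', 'meet']
```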
Speech recognition hasn’t quite become the norm for interfacing with a computer the way a lot of sci-fi in the past portrayed it (though there are some real hands-free applications for driving), but on systems that do use it, it’s pretty much the norm to learn from past speech and use it to improve current recognition.
Those are all examples of the system shuffling through your data and using it to improve performance elsewhere. They certainly can have data privacy issues, like malicious soft keyboards on mobile OSes having access to everything one types. But they have established themselves. So I kind of suspect that Microsoft’s basic idea of ramming a lot of your data — maybe not everything you see on the screen — into a machine learning system and then using that training to do something useful for you is probably going to be a thing at some point in the future.
EDIT: There have been situations in the past where a new technology came out and companies tried really hard to find a business application for it, and it never really went anywhere. One example of that is 3D hardware rendering. When 3D hardware came out, outside of CAD and architecture, it was mostly used for video games. There were some companies that tried to figure out ways to get businesses to spend on it because it could do something useful for them. I remember Intel showing rotating 3D charts and stuff. Today, we still mostly use 3D hardware rendering for playing video games, and there isn’t much of a business case for it.
My guess is that that probably won’t be the fate of general machine learning running on our data. That is, there may be more data privacy safeguards put into place, or data might not leave our systems, or learning might be based on something other than screenshots, or might not actually run on the laptop, or any number of things. But I suspect that someone is going to manage to produce some kind of system that leverages parallel processing to run a machine learning system that performs tasks businesses find valuable enough to spend money on.
Hell yes they are, empowering more people to switch to Linux!