Many places are still evaluating options and the cost of switching; where I work still is, even though we already have a large Linux server fleet. For most companies that do switch, I expect it to be a 3-5 year plan to ramp up onto something else.
A method not yet mentioned is deleting by inode. (I've accidentally created filenames I didn't know how to escape at the time, like "--" or other command-line flags/special characters.) First find the inode:

ls -li

Once you have the inode:

find . -type f -inum $inode -delete
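A quick end-to-end sketch (the filename and inode number here are illustrative):

touch -- '--force'        # create an awkward name that looks like a flag
ls -li                    # suppose this shows inode 1234567 for '--force'
find . -type f -inum 1234567 -delete

(For this particular case, rm -- '--force' or rm ./--force would also work; the inode trick covers names you can't type at all.)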
JollyGreen_sasquatch@sh.itjust.works to United States | News & Politics@midwest.social • Goodbye to start-stop systems – the EPA under Trump concludes that they are not worth it and could disappear from new models • 1 · 29 days ago
So the cost of a tow or mobile mechanic, plus the cost of a replacement starter, plus the cost of alternate transport or lost wages, would each take years to make up for.
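Rough, fully illustrative numbers: if start-stop saves ~5% on $1,500/year of fuel, that's about $75/year saved; a single $150 tow plus a $400-600 starter replacement is $550-750, i.e. one failure wipes out 7-10 years of savings.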
JollyGreen_sasquatch@sh.itjust.works to Technology@lemmy.world • The plan for nationwide fiber internet might be upended for Starlink • English · 1 · 30 days ago
Transmission loss/attenuation only informs the power needed on the transmission side for the receiver to be able to receive the signal. The wireless networks I am talking about don't really have packet loss (aside from when the link goes down for reasons like hardware failure).
I mention Chicago to New York specifically because in the financial trading world we use both wireless network paths and fiber paths between those locations, and measured/real latency is a very big deal, measured down to the nanosecond.
So what I mention has nothing to do with human perception, as fiber and wireless are both faster than most humans' perception. We also don't have packet loss on either network path.
High-speed/high-frequency wireless is bound by the curvature of the earth and the terrain for repeater locations. Even with all of those repeaters, the measured latency of these commercially available wireless links is about half the latency of the most direct commercially available fiber path between Chicago and New York.
Fiber has in-line passive amplifiers (how they work is a fun read), so transmission loss/attenuation only matters over the spans between amplifiers.
You are conflating latency (how long it takes bits to go between locations) with bandwidth (how many bits can be sent per second between locations) in your last line.
JollyGreen_sasquatch@sh.itjust.works to Technology@lemmy.world • The plan for nationwide fiber internet might be upended for Starlink • English · 3 · 30 days ago
The speed of light through a medium is what varies (I have to deal with this at work), and the speed of light through air is technically faster than the speed of light through fiber. Hollow-core fiber now makes that difference smaller.
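A rough worked example (straight-line distance assumed at ~1,150 km): in air, light covers that in about 1,150 km / 299,792 km/s ≈ 3.8 ms one way; in standard fiber (group index ~1.47, so ~204,000 km/s) the same straight line takes ≈ 5.6 ms, and real fiber routes are longer than the straight line, which is how the wireless links end up at roughly half the fiber latency.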
Between Chicago and New York, the latency of the commercially available specialized wireless links is about half that of standard fiber taking the most direct route. But the bandwidth is also only gigabits/s, vs. the terabits/s you can put over a typical fiber backbone.
But both are faster than humans can perceive anyway.
JollyGreen_sasquatch@sh.itjust.works to Technology@lemmy.world • First Look at Google's Unfinished DeX-Like Desktop Mode for Android • English · 6 · 2 months ago
There are modern lapdocks with USB-C.
JollyGreen_sasquatch@sh.itjust.works to Android@lemdro.id • Google rolling out auto-restart security feature to Android • English · 28 · 3 months ago
The before-first-unlock state is considered more secure: file/disk encryption keys are still in the hardware security module and services aren't running, so there is less surface to attack. When a phone is taken for evidence, it gets plugged into power and goes in a Faraday bag. This keeps the phone in the after-first-unlock state, where the encryption keys are in memory and more services that can be attacked are running to gain access.
JollyGreen_sasquatch@sh.itjust.works to Linux@lemmy.ml • I need to vent about Windows. I want workplaces to use Linux. • 2 · 4 months ago
In Linux everything is a file, so modifying files is all you really need. The hardest part is handling mobile endpoints like laptops that don't have always-on connections. Ansible pull mode is what we were looking at in a POC, with triggers on VPN connection. Note we already have a large Linux server footprint managed by Ansible, so it isn't a large lift for us.
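A minimal sketch of the VPN-trigger idea, assuming NetworkManager; the repo URL, playbook name, and log path are all placeholders:

#!/bin/sh
# /etc/NetworkManager/dispatcher.d/50-ansible-pull
# NetworkManager passes the interface as $1 and the action as $2.
if [ "$2" = "vpn-up" ]; then
    # ansible-pull clones the repo and applies the named playbook locally
    ansible-pull -U https://git.example.com/it/endpoint-config.git local.yml \
        >>/var/log/ansible-pull.log 2>&1
fi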
JollyGreen_sasquatch@sh.itjust.works to Linux@lemmy.ml • Plug-and-play development environment • 1 · 4 months ago
Tried this at work and discovered it only really works in VS Code and probably Eclipse. Other IDEs claimed support, but we found it unusable.
JollyGreen_sasquatch@sh.itjust.works to Linux@lemmy.ml • Plug-and-play development environment • 3 · 4 months ago
I mostly agree with your point here, but I think you can limit the scope a bit more: mainly, provide a working build environment via one of the mentioned tools, since you will need it anyway for a CI/CD pipeline. You can additionally make the full development environment you use available for people who want it. It is important that it be one that is regularly used, to keep the instructions up to date for anyone who might want to contribute.
From my observations as a sys admin, people tend to prefer the tools they are familiar with, especially as you cross disciplines. A known working example is usually easy to adapt to anyone’s preferred tooling.
JollyGreen_sasquatch@sh.itjust.works to Linux@programming.dev • Systemd Adding The Ability to Boot Directly Into A Disk Image Downloaded Via HTTP • 1 · 5 months ago
Modern UEFI firmware generally has HTTP boot options, and iPXE has supported HTTP boot for a long time, though I still get the GRUB2 bootloader bits over TFTP, then use HTTP for the kernel and initrd.
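A sketch of that split as a grub.cfg fragment (the server address and image paths are made up; GRUB's net modules accept (tftp,…) and (http,…) device syntax):

menuentry 'Netboot installer' {
    # GRUB itself arrived over TFTP; fetch the big artifacts over HTTP
    linux (http,192.0.2.10)/images/vmlinuz
    initrd (http,192.0.2.10)/images/initrd.img
}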
JollyGreen_sasquatch@sh.itjust.works to Selfhosted@lemmy.world • *Permanently Deleted* • English · 1 · 5 months ago
The lack of a version is the problem. Syntax has changed over time, so when someone finds or has an older compose file, there is no hint that it won't work with the current version of docker-compose until you get errors, and no graceful way to handle it.
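For example, older files declared a top-level version key, which newer Compose implementations now flag as obsolete while other syntax has shifted underneath (a sketch; the service itself is arbitrary):

# an older-era compose file; "version" was once expected, and is now obsolete
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"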
JollyGreen_sasquatch@sh.itjust.works to Selfhosted@lemmy.world • *Permanently Deleted* • English · 1 · 5 months ago
Compose doesn't have a versioned standard (it did for a bit, IIRC), which also means you can't always just grab a compose file and know it will just work.
Most self-hosted apps work fine as giant all-in-one containers, even complex ones; it's when you need to scale that you usually hit problems with the all-in-one approach and have to change.
JollyGreen_sasquatch@sh.itjust.works to homeassistant@lemmy.world • Security/FULL self hosting? Looking for info before starting... • English · 3 · 5 months ago
If Philips wrote the plugin it might, but all the plugins I have looked at are written by the community. Most plugins are only polling-based, so they are scraping data into HA's recorder.
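As an illustration of the polling pattern, a minimal Home Assistant RESTful sensor (the endpoint address and JSON field are made up):

sensor:
  - platform: rest
    name: bridge_status
    resource: http://192.168.1.50/api/status
    scan_interval: 30          # poll every 30 seconds
    value_template: "{{ value_json.state }}"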
JollyGreen_sasquatch@sh.itjust.works to homeassistant@lemmy.world • Security/FULL self hosting? Looking for info before starting... • English · 4 · 5 months ago
"Syncing data" doesn't mean all data; it means the integration requires non-local resources (i.e. a cloud API) to function. You do have to look at each integration to see what it is doing. I would expect a Spotify integration just hits the Spotify API, and can maybe interact with local devices that Spotify can stream to (e.g. a Chromecast).
JollyGreen_sasquatch@sh.itjust.works to ADHD@lemmy.world • [Update] I took two pills intead of one this morning • English · 3 · 7 months ago
It's anything acidic, from what I have found/read; the pamphlet that comes with mine mentions it. You can get a stronger and longer-lasting effect if you take calcium carbonate (i.e. antacids) around the same time.
JollyGreen_sasquatch@sh.itjust.works to Selfhosted@lemmy.world • Paid SSL vs Letsencrypt • English · 10 · 10 months ago
The main benefits of paying for certs are:
- as many said, getting more than 90 days of validity for certs that are harder to rotate, or where the automation hasn't been done yet.
- higher rate limits for issuing and renewing certs; you can ask Let's Encrypt to raise your limits, but you can still hit them.
- you can get certs for things other than websites, e.g. code signing.
The only thing that matters to most people is that they don't get cert errors when visiting a website or installing software. Any CA that is in the browser, OS, and various language trust stores is the same to that effect.
The rules for inclusion in the browsers' trust stores are strict (many Linux distros and language trust stores just use the Mozilla cert set), which is where the trust comes from.
Which CA provider you choose doesn't change your potential attack surface. The question about attack surface seems like it might come from a gap in understanding how certs and signing work.
A cert has two parts: the public cert and the private key. CAs sign your site's public cert with their private key; they never have or need your private key. A public cert can be used to verify that something was signed by the matching private key, and to encrypt data such that only the private key can decrypt it.
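A quick openssl sketch of why the CA never needs your private key (file names and the CN are placeholders):

# generate the private key; this never leaves your server
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out site.key
# build a certificate signing request containing only the public half + identity
openssl req -new -key site.key -subj "/CN=example.com" -out site.csr
# site.csr is what you hand to the CA; they sign it and return the public cert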
JollyGreen_sasquatch@sh.itjust.works to News@lemmy.world • Disney says DeSantis-appointed district is dragging feet in providing documents for lawsuit • 21 · 2 years ago
That would still technically be a math problem. I'm not sure if it falls in combinatorics, statistics/probability, or scheduling, but I've had problems like this on math and CS exams.
We are a VDI shop too, so we have to be sure performance is at least as good. Combined with a few other complicated setups, it is non-trivial to test alternatives.
Migration is usually shutting down the VM, exporting it from VMware, and importing it on the other side. We usually have some really generous maintenance windows (basically 36-48 hours every weekend), and I would still expect it to take a year to migrate everything if we went all in.
We would be over 50 hypervisor hosts too, with SAN connectivity, shared disks, and GPU-accelerated VDI. It's a lot to evaluate for each option, test that it all works, and figure out the caveats.
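One illustrative per-VM path for the export/import step, assuming a libvirt/KVM target (the vCenter host, datacenter path, and VM name are made up):

# convert a powered-off VMware guest straight into the local libvirt "default" pool
virt-v2v -ic 'vpx://administrator@vcenter.example.com/Datacenter/esxi-host?no_verify=1' \
    app-server-01 -o libvirt -os default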